CN113609942A - Road intelligent monitoring system based on multi-view and multi-spectral fusion - Google Patents
- Publication number
- CN113609942A CN113609942A CN202110849217.XA CN202110849217A CN113609942A CN 113609942 A CN113609942 A CN 113609942A CN 202110849217 A CN202110849217 A CN 202110849217A CN 113609942 A CN113609942 A CN 113609942A
- Authority
- CN
- China
- Prior art keywords
- road
- fusion
- image
- intelligent
- monitoring system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G01J3/2823—Imaging spectrometer
- G01J2003/2826—Multispectral imaging, e.g. filter imaging
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06N3/04—Neural networks; architecture, e.g. interconnection topology
- G06N3/08—Neural networks; learning methods
- G06T3/4053—Scaling of whole images or parts thereof based on super-resolution
- G06T5/70—Denoising; smoothing
- G06T5/80—Geometric correction
- G06T7/11—Region-based segmentation
- G06T7/55—Depth or shape recovery from multiple images
- G06T2207/10016—Video; image sequence
- G06T2207/10021—Stereoscopic video; stereoscopic image sequence
- G06T2207/10024—Color image
- G06T2207/20081—Training; learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30232—Surveillance
- G06T2207/30236—Traffic on road, railway or crossing
Abstract
The invention discloses a road intelligent monitoring system based on multi-view multi-spectral fusion, comprising: a fusion camera that includes a plurality of imaging channels and an image fusion processor, the imaging channels acquiring road images in a plurality of spectral bands and the image fusion processor fusing the acquired images; a comprehensive processing unit connected to the image fusion processor, which processes the fused road images by executing algorithms and outputs road identification results and labels; and an application system connected to the comprehensive processing unit, which monitors the road according to the identification results and labels. By combining multi-spectral fused imagery, binocular stereo vision, and neural-network-based intelligent recognition, the system can record the three-dimensional contour, distance, speed, vehicle type, license plate, and body of road targets around the clock.
Description
Technical Field
The invention relates to road monitoring, in particular to an intelligent road monitoring system based on multi-view multi-spectral fusion.
Background
Visible-light cameras acquire visible-light images, license plate numbers, and the like of road targets, while millimeter-wave ranging and speed-measuring radars acquire road-target distance, speed, and the like. To further obtain fine three-dimensional contour information of road targets, lidar technology has also gradually been applied to road monitoring.
However, in practical road-monitoring applications, millimeter-wave radar cannot precisely measure the three-dimensional profile of a road target, and visible-light cameras suffer from poor night vision and poor adaptability to heavy rain and fog. Although emerging lidar can accurately obtain the three-dimensional profile information of a target, it still has poor adaptability to severe weather, a high price, and a short service life caused by moving parts. It is therefore necessary to develop road-monitoring equipment that is low-cost, all-weather, long-lived, and well adapted to severe weather, while simultaneously covering road-target ranging, speed measurement, target texture, three-dimensional contour, license plate recognition, and other functions.
Disclosure of Invention
In view of these defects, the invention provides an intelligent road monitoring system based on multi-view multi-spectral fusion, which combines multi-spectral fused imagery, binocular stereo vision, and neural-network-based intelligent recognition, and can record the three-dimensional contour, distance, speed, vehicle type, license plate, and body of road targets around the clock.
In order to achieve the above purpose, the embodiment of the invention adopts the following technical scheme:
a road intelligent monitoring system based on multi-view multispectral fusion comprises:
a fusion camera comprising a plurality of imaging channels and an image fusion processor;
the imaging channels are used for acquiring road images of a plurality of spectrums;
the image fusion processor is used for carrying out image fusion on the acquired road images with the plurality of spectrums;
the comprehensive processing unit is connected with the image fusion processor and used for processing the fused road image by executing an algorithm and outputting a road identification result and a label;
and the application system is connected with the comprehensive processing unit and is used for monitoring the road according to the road identification result and the label.
In accordance with one aspect of the invention, the plurality of imaging channels includes a thermal infrared imaging sensor and a low-light or short-wave imaging sensor.
According to one aspect of the invention, the thermal infrared imaging sensor acquires a thermal infrared image of road targets and characterizes the temperature-field distribution of different objects; the low-light or short-wave imaging sensor acquires a road short-wave infrared image or a visible low-light image.
According to one aspect of the invention, the image fusion processor performs denoising, enhancement, and distortion correction on the road-target thermal infrared image, and performs ISP preprocessing on the road short-wave infrared image or the visible low-light image.
According to one aspect of the invention, the image fusion processor matches and fuses the processed road-target thermal infrared image with the short-wave infrared image or the visible low-light image to generate a multi-view two-color fusion image.
According to one aspect of the invention, the comprehensive processing unit performs multi-view stereoscopic vision processing based on the multi-view two-color fusion image, produces a depth image, and converts the depth image into a three-dimensional point cloud image.
According to an aspect of the present invention, the integrated processing unit executes a resolution up-conversion algorithm based on the multi-view two-color fusion image to generate a super-resolution two-color fusion image.
According to one aspect of the invention, the comprehensive processing unit combines the super-resolution two-color fusion image with the three-dimensional point cloud image to execute a multi-view stereoscopic vision algorithm, achieving highly reliable intelligent identification, classification, and measurement of road targets; it also executes a semantic segmentation algorithm on the super-resolution two-color fusion image, retaining road and road-related target information, eliminating irrelevant elements, and marking the intelligent identification and measurement results in the super-resolution two-color fusion image for output.
According to one aspect of the invention, the system further comprises a display screen for displaying the road identification results and labels.
According to one aspect of the invention, the system further comprises a support structure for supporting the fusion cameras and maintaining a stable baseline.
The implementation of the invention has the following advantages. The road intelligent monitoring system based on multi-view multi-spectral fusion comprises: a fusion camera including a plurality of imaging channels and an image fusion processor, the imaging channels acquiring road images in a plurality of spectral bands and the processor fusing them; a comprehensive processing unit connected to the image fusion processor, which processes the fused road images by executing algorithms and outputs road identification results and labels; and an application system connected to the comprehensive processing unit, which monitors the road according to those results and labels. Thermal infrared imaging does not depend on reflected light: it characterizes the temperature-field distribution of different objects, can detect targets under severe illumination and weather conditions, and is particularly effective for high-contrast thermal targets such as pedestrians, animals, and moving vehicles. Low-light imaging achieves good image quality and high resolution in dark, low-light conditions at night, while short-wave infrared imaging combines clear rendering of target detail texture with a degree of night-vision and fog-penetration capability. Fusing thermal infrared with low-light or short-wave infrared sensors therefore provides richer, more reliable, more accurate, and more intelligent passive vision throughout the day and under severe lighting and weather conditions. Through the multi-view stereo vision algorithm, the system can obtain high-density three-dimensional point clouds around the clock and in severe weather.
Because the system simultaneously possesses dual-light fusion and stereoscopic vision capabilities, the accuracy and reliability of typical road-target identification and analysis are effectively improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a structural framework diagram of a road intelligent monitoring system based on multi-view multi-spectral fusion according to the present invention;
FIG. 2 is a frame diagram of a workflow of an intelligent road monitoring system based on multi-view multi-spectral fusion according to the present invention;
FIG. 3 is a schematic view of a fused camera according to the present invention;
fig. 4 is a schematic structural diagram of a road intelligent monitoring system based on multi-view multi-spectral fusion according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, fig. 2, fig. 3 and fig. 4, an intelligent road monitoring system based on multi-view multispectral fusion includes:
a fusion camera comprising a plurality of imaging channels and an image fusion processor;
in practical application, the device also comprises a support structure for supporting the fusion camera and maintaining the stable baseline.
In practical application, the fusion camera is a dual-spectrum fusion camera, two, three or more dual-spectrum fusion cameras are installed on a unified support structure, the support structure is high in strength, stability of baselines among the multiple cameras is guaranteed, the length of the baselines among the multiple cameras is adjusted according to the requirement of monitoring working distance, the farthest observation distance is set within the range of 50-400 m, and the visual axes of the multiple cameras observe the same scene in parallel.
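The link between baseline length and working distance follows from stereo triangulation: depth Z = f·B/d, so for a fixed disparity-matching error a longer baseline yields finer depth resolution at long range. A minimal sketch illustrating this relationship (the focal length and matching-error values below are hypothetical, not taken from the patent):

```python
import numpy as np

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo triangulation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def depth_error(focal_px, baseline_m, depth_m, match_err_px=0.25):
    """Depth uncertainty for a subpixel matching error: dZ = Z^2 * e / (f * B)."""
    return depth_m ** 2 * match_err_px / (focal_px * baseline_m)

f_px = 2000.0  # assumed focal length in pixels
for baseline in (0.5, 1.0, 2.0):  # candidate baselines in metres
    # depth error at the 400 m far end of the stated working range
    print(baseline, round(depth_error(f_px, baseline, 400.0), 2))
```

At the 400 m limit of the 50-400 m range mentioned above, doubling the baseline halves the depth error, which is why the baseline is adjusted to the monitoring working distance.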
The imaging channels are used for acquiring road images of a plurality of spectrums;
in practical applications, the plurality of imaging channels includes a thermal infrared imaging sensor and a low light or short wave imaging sensor.
In practical application, the thermal infrared imaging sensor acquires a road-target thermal infrared image, characterizing the temperature-field distribution of different objects; the low-light or short-wave imaging sensor acquires a road short-wave infrared image or a visible low-light image.
In practical application, the thermal infrared imaging sensor is one imaging channel of the dual-spectrum fusion camera. It acquires a thermal infrared image of road targets, characterizing the temperature-field distribution of different objects; it can detect under severe illumination and weather conditions, and its detection capability is particularly strong for high-contrast thermal targets such as pedestrians, animals, and moving vehicles.
In practical application, the low-light or short-wave imaging sensor is the other imaging channel of the dual-spectrum fusion camera; either a short-wave infrared imaging sensor or a visible-light low-light imaging sensor may be selected. Visible low-light imaging and short-wave infrared imaging each have their advantages and can be chosen according to the application: visible low-light imaging gives good image quality and higher resolution under dark night-time conditions, while short-wave infrared imaging combines clear rendering of target detail texture with a degree of night-vision and fog-penetration capability.
The image fusion processor is used for carrying out image fusion on the acquired road images with the plurality of spectrums;
in practical application, the image fusion processor performs denoising enhancement and distortion correction on a road target thermal infrared image, and performs ISP preprocessing on a road short wave infrared image or a visible low light image.
In practical application, the image fusion processor performs matching fusion on the processed road target thermal infrared image and the road short wave infrared image or the visible low-light-level image to generate a multi-purpose two-color fusion image.
In practical application, an image fusion processor in the dual-spectrum fusion camera controls a plurality of imaging channels to be synchronously exposed, receives output videos of the plurality of imaging channels, achieves image denoising enhancement and image distortion correction on a road target thermal infrared image, further achieves ISP preprocessing on a road short wave infrared image or a visible low light image, then matches and fuses the two images, and finally outputs a two-color fusion image.
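The patent does not specify the fusion rule itself. A common lightweight scheme, shown here only as an illustration (the channel mapping and weight `alpha` are assumptions, not the patented algorithm), normalizes both registered frames and maps them to different color channels so that warm targets stand out against the low-light background:

```python
import numpy as np

def normalize(img):
    """Scale an image to [0, 1], guarding against a constant image."""
    img = img.astype(np.float64)
    return (img - img.min()) / (np.ptp(img) + 1e-9)

def two_color_fuse(thermal, lowlight, alpha=0.6):
    """Fuse a registered thermal / low-light pair into a 'two-color' image:
    thermal energy drives the red channel, low-light or SWIR texture
    drives the green and blue channels."""
    t = normalize(thermal)
    v = normalize(lowlight)
    r = np.clip(alpha * t + (1.0 - alpha) * v, 0.0, 1.0)
    return np.stack([r, v, v], axis=-1)

# Toy 4x4 inputs standing in for registered sensor frames
thermal = np.arange(16.0).reshape(4, 4)
lowlight = np.full((4, 4), 8.0)
fused = two_color_fuse(thermal, lowlight)
print(fused.shape)  # (4, 4, 3)
```

In a real pipeline the two frames must first be registered to subpixel accuracy, as the preceding matching step describes; fusion of misaligned frames produces ghosting.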
The comprehensive processing unit is connected with the image fusion processor and used for processing the fused road image by executing an algorithm and outputting a road identification result and a label;
in practical application, the comprehensive processing unit synchronously controls the synchronous exposure of the multi-part dual-spectrum fusion camera and controls the working state of the multi-part dual-spectrum fusion camera. The comprehensive processing unit receives the two-color fusion images of the multi-part double-spectrum fusion camera, executes various algorithms such as a multi-view stereoscopic vision algorithm, road target identification classification and display, and finally outputs the identification result and the marked video stream.
In practical application, the comprehensive processing unit performs multi-view stereoscopic vision processing based on the multi-view two-color fusion image, produces a depth image and converts the depth image into a three-dimensional point cloud image.
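The depth-image-to-point-cloud conversion is a standard pinhole back-projection. A self-contained sketch, where the intrinsics (fx, fy, cx, cy) are hypothetical calibration values rather than figures from the patent:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres per pixel) into an (N, 3)
    array of camera-frame XYZ points via the pinhole model:
    X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

depth = np.full((4, 4), 100.0)  # toy flat scene 100 m away
cloud = depth_to_point_cloud(depth, fx=2000.0, fy=2000.0, cx=2.0, cy=2.0)
print(cloud.shape)  # (16, 3)
```

The pixel at the principal point back-projects to (0, 0, Z), a quick sanity check on any calibration.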
In practical application, the comprehensive processing unit executes a resolution improvement algorithm based on the multi-view two-color fusion image to generate a super-resolution two-color fusion image.
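The resolution-improvement step is described only at a high level. One classical multi-frame approach consistent with the multi-view setup is shift-and-add super-resolution: several registered low-resolution frames with known subpixel offsets are deposited onto a finer grid and averaged. The sketch below is a deliberately simplified illustration (integer-rounded shifts, nearest-neighbour placement), not the patented algorithm:

```python
import numpy as np

def shift_and_add_sr(frames, shifts, scale=2):
    """Naive shift-and-add super-resolution: place each low-res frame
    onto a scale-times-finer grid at its known (dy, dx) subpixel shift,
    then average wherever samples overlap."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        ys = (np.arange(h) * scale + int(round(dy * scale))) % (h * scale)
        xs = (np.arange(w) * scale + int(round(dx * scale))) % (w * scale)
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)

# Four toy frames offset by half a low-res pixel exactly tile the 2x grid
frames = [np.full((3, 3), k) for k in (1.0, 2.0, 3.0, 4.0)]
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
hi = shift_and_add_sr(frames, shifts, scale=2)
print(hi.shape)  # (6, 6)
```

Real systems estimate the subpixel shifts by registration and follow the shift-and-add pass with deconvolution; here the shifts are assumed known to keep the sketch short.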
In practical application, the comprehensive processing unit combines the super-resolution two-color fusion image with the three-dimensional point cloud image to execute a multi-view stereoscopic vision algorithm, achieving highly reliable intelligent identification, classification, and measurement of road targets; it also executes a semantic segmentation algorithm on the super-resolution two-color fusion image, retaining road and road-related target information, eliminating irrelevant elements, and marking the intelligent identification and measurement results in the super-resolution two-color fusion image for output.
In practical application, the workflow of the comprehensive processing unit is as follows. According to the triangulation principle, the comprehensive data processing unit uses the multi-view thermal infrared images to produce infrared three-dimensional point cloud data and the multi-view low-light or short-wave images to produce low-light or short-wave point cloud data, then fuses the two to produce highly reliable two-color three-dimensional point cloud data. Based on this two-color point cloud, the unit calculates in real time the distances and three-dimensional contours of targets such as vehicles and pedestrians, and further indirectly calculates target speed. The calibrated dual-spectrum fusion cameras observe the same scene, and after correction, interpolation, registration, and fusion the unit computes a super-resolution two-color fusion image. The unit has neural-network-accelerated computing capability: combining the two-color fusion image data with the three-dimensional data, it realizes highly reliable road semantic segmentation, vehicle-type recognition, pedestrian classification, and the like through methods such as deep learning. Finally, the unit executes semantic segmentation and similar algorithms, retains information relevant to the road and road targets, and labels key road information for vehicles, pedestrians, and other targets (including distance, speed, three-dimensional size, type, license plate, and the like) in the super-resolution fusion image.
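The indirect speed calculation mentioned above can be realized by differencing successive triangulated positions of a tracked target across synchronized frames. A sketch with a least-squares fit for noise robustness (the frame rate and track values are hypothetical):

```python
import numpy as np

def target_speed(positions, timestamps):
    """Estimate speed (m/s) from a track of 3-D target positions by a
    least-squares linear fit of travelled distance against time."""
    p = np.asarray(positions, dtype=float)
    t = np.asarray(timestamps, dtype=float)
    disp = np.linalg.norm(p - p[0], axis=1)   # distance from the first sample
    slope = np.polyfit(t - t[0], disp, 1)[0]  # m/s
    return slope

# Target receding along the road axis at 20 m/s, sampled at 25 fps
t = np.arange(5) / 25.0
track = [(0.0, 0.0, 100.0 + 20.0 * ti) for ti in t]
print(round(target_speed(track, t), 3))  # 20.0
```

Fitting over several frames rather than differencing a single pair damps the per-frame triangulation noise that grows quadratically with range.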
And the application system is connected with the comprehensive processing unit and is used for monitoring the road according to the road identification result and the label.
In practical application, the application system is divided into an intelligent traffic terminal and a vehicle-road cooperation terminal according to user requirements. Results output by the comprehensive processing unit are uploaded to the intelligent traffic terminal for vehicle management and control, or to the vehicle-road cooperation terminal, which distributes road-vehicle information to each intelligent-driving vehicle terminal.
The system further comprises a display screen for displaying the road identification results and labels output by the comprehensive processing unit for convenient review.
In practical application, the structure of the dual-spectrum fusion camera is as shown in fig. 3: it consists of a thermal infrared lens, a low-light lens, a thermal infrared interface power board, a low-light ISP image-processing board, a thermal infrared ISP image-processing board, a structural housing, and the like. When the camera is mounted on the support structure, a sun shield should be installed.
In practical application, as shown in fig. 4, two dual-spectrum fusion cameras are mounted on a support structure to form a four-view two-color fusion system for target three-dimensional measurement and intelligent identification monitoring, with the two fixed mounting points arranged in the middle.
In practical application, as shown in fig. 2, a workflow of the system is shown, specifically:
p1: denoising, enhancing and distortion correcting the thermal infrared image;
p2: preprocessing a short wave infrared image or a visible low-light ISP;
p3: matching and fusing the thermal infrared image with the wave infrared image or the visible low-light image;
p4: producing a depth image based on multi-view stereoscopic vision processing of the multi-view two-color fusion image;
p5: converting the depth image into a three-dimensional point cloud image;
p6: based on the matching fusion processing of the multi-view arrayed fusion image, the super-resolution improvement of the fusion image is realized;
p7: the double-color fusion image and the three-dimensional point cloud image are integrated to realize high-reliability road target intelligent identification, classification and measurement;
p8: and performing semantic segmentation on the super-resolution two-color fusion image, retaining road and road related target information, removing irrelevant elements, and marking the target intelligent identification and measurement result in the fusion image for display output or recording.
In practical application, the system has the advantages that:
(1) The system is a multi-view three-dimensional vision system based on multi-spectral fusion cameras. It realizes accurate and reliable road-target detection under severe illumination and weather conditions (including complete darkness, rain and snow, fog, haze, glare, and the like), solving the challenges conventional vision systems face in detecting complex, differentiated road targets and working reliably under extreme weather and illumination.
(2) The system can generate accurate three-dimensional point cloud data, thereby providing rich, accurate, pixel-by-pixel three-dimensional target information; this technique offers a passive, economical, and efficient way to generate high-resolution three-dimensional point clouds without a radiation source.
(3) Single-view road-target recognition has traditionally relied on deep neural networks and can encounter the extreme case of objects unseen during network training; this system combines dual-light fusion information, target three-dimensional information, and deep neural networks to provide more accurate target detection and classification.
(4) After calibration, the multi-view multispectral cameras in the system observe the same scene, and the on-site comprehensive data processing unit computes a super-resolution image after correction, interpolation, registration, and fusion. This provides a reliable way to produce high-resolution images and markedly improves the road-monitoring display quality.
(5) The system integrates multiple monitoring and measurement functions into one device and needs no companion monitoring systems: it can independently capture vehicle-body images and license plates, measure complete vehicle three-dimensional dimensions, classify vehicle types, measure vehicle position, distance, and speed, and also measure, identify, and label other road targets such as pedestrians.
(6) Compared with lidar-based three-dimensional vision systems applied to road monitoring, the system has a longer service life (no moving parts) and a lower price (no expensive light-source components). Compared with millimeter-wave radar applied to road monitoring, it has higher imaging resolution and better target-identification capability.
The implementation of the invention has the advantages that: the invention relates to a road intelligent monitoring system based on multi-view multi-spectral fusion, which comprises: a fusion camera comprising a plurality of imaging channels and an image fusion processor; the imaging channels are used for acquiring road images of a plurality of spectrums; the image fusion processor is used for carrying out image fusion on the acquired road images with the plurality of spectrums; the comprehensive processing unit is connected with the image fusion processor and used for processing the fused road image by executing an algorithm and outputting a road identification result and a label; and the application system is connected with the comprehensive processing unit and is used for monitoring the road according to the road identification result and the label. The thermal infrared imaging does not depend on reflected light, represents the temperature field distribution conditions of different objects, can detect under severe illumination and weather conditions, and particularly has outstanding detection capability for strong high-contrast thermal targets such as pedestrians, animals, running vehicles and the like. The low-light-level imaging can achieve good imaging effect and high resolution under dark and low-light-level line components at night, and the short-wave infrared imaging can give consideration to clear expression of target detail textures and certain night vision and fog penetration capability. The fusion of thermal infrared with low-light or short-wave infrared sensors provides richer, more reliable, more accurate, more intelligent passive visual capabilities throughout the day and under severe lighting and weather conditions. The system can obtain high-density three-dimensional point cloud all day long and under severe weather conditions through a multi-view stereo vision algorithm. 
Because the system combines dual-spectrum fusion with stereoscopic vision, the accuracy and reliability of identifying and analyzing typical road targets are effectively improved.
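The stereoscopic-vision capability mentioned above ultimately yields a depth image that is back-projected into a three-dimensional point cloud. A minimal sketch of that back-projection under standard pinhole-camera assumptions (the intrinsic parameters and function name here are illustrative, not taken from the patent):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into an Nx3 point cloud
    using pinhole intrinsics (fx, fy: focal lengths in pixels;
    cx, cy: principal point). Assumes a rectified camera; pixels with
    zero or negative depth are treated as invalid and dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx   # horizontal coordinate in camera frame
    y = (v - cy) * z / fy   # vertical coordinate in camera frame
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]
```

In a stereo setup the depth itself typically comes from disparity as z = fx * baseline / disparity, which is why claim 10's stable baseline matters for measurement accuracy.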
The above description is only an embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention disclosed herein are intended to be covered by the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
Claims (10)
1. A road intelligent monitoring system based on multi-view multispectral fusion is characterized by comprising:
a fusion camera comprising a plurality of imaging channels and an image fusion processor;
the imaging channels are used for acquiring road images of a plurality of spectrums;
the image fusion processor is used for carrying out image fusion on the acquired road images with the plurality of spectrums;
the comprehensive processing unit is connected with the image fusion processor and used for processing the fused road image by executing an algorithm and outputting a road identification result and a label;
and the application system is connected with the comprehensive processing unit and is used for monitoring the road according to the road identification result and the label.
2. The intelligent road monitoring system based on multi-view multispectral fusion according to claim 1, wherein the plurality of imaging channels comprise a thermal infrared imaging sensor and a low-light or short-wave infrared imaging sensor.
3. The intelligent road monitoring system based on multi-view multispectral fusion according to claim 2, wherein the thermal infrared imaging sensor acquires thermal infrared images of road targets, characterizing the temperature-field distribution of different objects; and the low-light or short-wave infrared imaging sensor acquires a road short-wave infrared image or a visible low-light image.
4. The intelligent road monitoring system based on multi-view multispectral fusion according to claim 3, wherein the image fusion processor performs denoising, enhancement, and distortion correction on the road-target thermal infrared image, and performs ISP preprocessing on the road short-wave infrared image or the visible low-light image.
5. The intelligent road monitoring system based on multi-view multi-spectral fusion according to claim 4, wherein the image fusion processor matches and fuses the processed road-target thermal infrared image with the road short-wave infrared image or the visible low-light image to generate a multi-view two-color fusion image.
6. The intelligent road monitoring system based on multi-view multi-spectral fusion according to claim 5, wherein the comprehensive processing unit performs multi-view stereo vision processing on the multi-view two-color fusion images, generates a depth image, and converts the depth image into a three-dimensional point cloud image.
7. The intelligent road monitoring system based on multi-view multi-spectral fusion of claim 6, wherein the comprehensive processing unit executes a resolution enhancement algorithm based on the multi-view two-color fusion image to generate a super-resolution two-color fusion image.
8. The intelligent road monitoring system based on multi-view multi-spectral fusion according to claim 7, wherein the comprehensive processing unit combines the super-resolution two-color fusion image with the three-dimensional point cloud image and executes a multi-view stereo vision algorithm to realize highly reliable intelligent identification, classification, and measurement of road targets; and executes a semantic segmentation algorithm on the super-resolution two-color fusion image to retain road and road-related target information, eliminate irrelevant elements, and mark the intelligent identification and measurement results of targets in the super-resolution two-color fusion image for output.
9. The intelligent road monitoring system based on multi-view multispectral fusion according to claim 1, further comprising a display screen for displaying the road identification results and labels.
10. The intelligent road monitoring system based on multi-view multispectral fusion according to claim 1, further comprising a support structure for supporting the fusion camera to maintain a stable baseline.
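Claim 8's step of retaining road-related information while eliminating irrelevant elements can be pictured as masking the fused image with a semantic label map. A hypothetical sketch (the label map, the class ids, and the function name are assumptions for illustration; any semantic-segmentation network could supply the labels):

```python
import numpy as np

def keep_road_relevant(fused_rgb, seg_labels, relevant_ids):
    """Blank out pixels whose segmentation class is not road-relevant.
    fused_rgb: HxWx3 fused image; seg_labels: HxW integer label map
    from a semantic-segmentation model; relevant_ids: iterable of
    class ids to keep (e.g. road, vehicle, pedestrian)."""
    mask = np.isin(seg_labels, list(relevant_ids))
    out = fused_rgb.copy()
    out[~mask] = 0  # eliminate irrelevant elements outside the mask
    return out
```

Identification and measurement results (boxes, labels, distances) would then be drawn onto the retained regions before output.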
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110849217.XA CN113609942B (en) | 2021-07-27 | 2021-07-27 | Road intelligent monitoring system based on multi-view and multi-spectral fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113609942A true CN113609942A (en) | 2021-11-05 |
CN113609942B CN113609942B (en) | 2022-11-22 |
Family
ID=78305538
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110849217.XA Active CN113609942B (en) | 2021-07-27 | 2021-07-27 | Road intelligent monitoring system based on multi-view and multi-spectral fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113609942B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN203134149U (en) * | 2012-12-11 | 2013-08-14 | 武汉高德红外股份有限公司 | Vehicle auxiliary driving system based on different wave band imaging fusion image processing |
CN103390281A (en) * | 2013-07-29 | 2013-11-13 | 西安科技大学 | Double-spectrum night vision instrument vehicle-mounted system and double-spectrum fusion design method |
CN105447838A (en) * | 2014-08-27 | 2016-03-30 | 北京计算机技术及应用研究所 | Method and system for infrared and low-level-light/visible-light fusion imaging |
WO2020103533A1 (en) * | 2018-11-20 | 2020-05-28 | 中车株洲电力机车有限公司 | Track and road obstacle detecting method |
CN113012469A (en) * | 2021-03-16 | 2021-06-22 | 浙江亚太机电股份有限公司 | Intelligent traffic early warning system based on target recognition |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114757854A (en) * | 2022-06-15 | 2022-07-15 | 深圳市安星数字系统有限公司 | Night vision image quality improving method, device and equipment based on multispectral analysis |
CN114757854B (en) * | 2022-06-15 | 2022-09-02 | 深圳市安星数字系统有限公司 | Night vision image quality improving method, device and equipment based on multispectral analysis |
CN117132519A (en) * | 2023-10-23 | 2023-11-28 | 江苏华鲲振宇智能科技有限责任公司 | Multi-sensor image fusion processing module based on VPX bus |
CN117132519B (en) * | 2023-10-23 | 2024-03-12 | 江苏华鲲振宇智能科技有限责任公司 | Multi-sensor image fusion processing module based on VPX bus |
Also Published As
Publication number | Publication date |
---|---|
CN113609942B (en) | 2022-11-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Huang et al. | The apolloscape dataset for autonomous driving | |
CN111951305B (en) | Target detection and motion state estimation method based on vision and laser radar | |
Choi et al. | KAIST multi-spectral day/night data set for autonomous and assisted driving | |
CN106650708B (en) | Automatic driving obstacle vision detection method and system | |
CN105711597B (en) | Front locally travels context aware systems and method | |
CN112215306B (en) | Target detection method based on fusion of monocular vision and millimeter wave radar | |
CN107133559B (en) | Mobile object detection method based on 360 degree of panoramas | |
US7366325B2 (en) | Moving object detection using low illumination depth capable computer vision | |
KR101364727B1 (en) | Method and apparatus for detecting fog using the processing of pictured image | |
CN113609942B (en) | Road intelligent monitoring system based on multi-view and multi-spectral fusion | |
CN104881645B (en) | The vehicle front mesh object detection method of feature based point mutual information and optical flow method | |
CN103390281A (en) | Double-spectrum night vision instrument vehicle-mounted system and double-spectrum fusion design method | |
Jiang et al. | Target detection algorithm based on MMW radar and camera fusion | |
CN104601953A (en) | Video image fusion-processing system | |
WO2023155483A1 (en) | Vehicle type identification method, device, and system | |
CN114114312A (en) | Three-dimensional target detection method based on fusion of multi-focal-length camera and laser radar | |
Diaz-Ruiz et al. | Ithaca365: Dataset and driving perception under repeated and challenging weather conditions | |
CN113643345A (en) | Multi-view road intelligent identification method based on double-light fusion | |
CN114966696A (en) | Transformer-based cross-modal fusion target detection method | |
CN106803073B (en) | Auxiliary driving system and method based on stereoscopic vision target | |
US20230177724A1 (en) | Vehicle to infrastructure extrinsic calibration system and method | |
Hosseini et al. | A system design for automotive augmented reality using stereo night vision | |
CN116794650A (en) | Millimeter wave radar and camera data fusion target detection method and device | |
CN110717457A (en) | Pedestrian pose calculation method for vehicle | |
CN102722724B (en) | Vehicle-mounted night view system having target identification function and target identification method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||