CN113792793A - Road video monitoring and improving method in adverse meteorological environment - Google Patents

Road video monitoring and improving method in adverse meteorological environment

Info

Publication number
CN113792793A
CN113792793A
Authority
CN
China
Prior art keywords
image data
road
image
meteorological
days
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111080366.0A
Other languages
Chinese (zh)
Other versions
CN113792793B (en)
Inventor
齐树平
王志斌
邱文利
许忠印
权恒友
冯雷
张少波
杨海峰
高新文
刘鹏祥
张莹
王洪涛
刘栋
郝文世
孙乙博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei Xiong'an Jingde Expressway Co ltd
Original Assignee
Hebei Xiong'an Jingde Expressway Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei Xiong'an Jingde Expressway Co ltd filed Critical Hebei Xiong'an Jingde Expressway Co ltd
Priority to CN202111080366.0A priority Critical patent/CN113792793B/en
Publication of CN113792793A publication Critical patent/CN113792793A/en
Application granted granted Critical
Publication of CN113792793B publication Critical patent/CN113792793B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a road video monitoring improvement method in an adverse meteorological environment, belonging to the field of road video monitoring. The method comprises: acquiring original image data of roads in existing adverse meteorological environments and inputting it into a multi-model image enhancement unit to obtain enhanced image data; and introducing a multi-perception strategy, performing fusion judgment on the measurement set N and the actual set P, and outputting a final set A. Based on multi-model image enhancement, the method realizes video monitoring across different adverse weather scenes; the multi-perception strategy improves the robustness of the system, raises the quality of road monitoring video in adverse weather such as rain, snow and fog, and reduces the influence of these adverse environmental factors on image recognition.

Description

Road video monitoring and improving method in adverse meteorological environment
Technical Field
The invention belongs to the field of road video monitoring, relates to a road monitoring improvement method, and particularly to a road monitoring improvement method in an adverse meteorological environment.
Background
A monitoring system based on video detection technology is a computer processing system that uses image processing and pattern recognition to detect and identify traffic targets. As video monitoring systems have matured, they have been applied to more and more scenes. Applied to traffic, they can detect, locate, identify and track traffic targets such as vehicles and pedestrians by analyzing the traffic images captured by cameras, and analyze and judge the traffic behavior of the detected, tracked and identified targets, thereby computing and collecting various traffic flow data while performing the adjustments and controls involved in traffic management, realizing intelligent traffic management.
However, in rain, snow and fog the probability of traffic accidents is much higher than on normal days, yet in such environments video monitoring systems struggle to meet practical application requirements, and their recognition performance still needs continuous improvement. At present, video monitoring systems for rainy, snowy and foggy conditions usually use a conventional single-model method: a neural network is trained to produce one video monitoring system that recognizes vehicles in adverse environments such as rain, snow and fog. The prior art still has the following disadvantages:
1. Images from the same viewpoint appear differently in different meteorological environments, so a monitoring system using a single model cannot achieve an ideal monitoring effect;
2. External information cannot be introduced for auxiliary judgment, and robustness is poor.
Disclosure of Invention
In order to solve these problems, the invention provides a road monitoring improvement method in an adverse meteorological environment, which improves the monitoring effect by introducing a multi-perception strategy and multi-model image enhancement.
The technical scheme adopted by the invention is as follows.
a road video monitoring and improving method under adverse meteorological environment comprises the following steps,
step 1: a multi-model image enhancement unit for constructing a rainy-day image enhancement model, a foggy-day image enhancement model and a snowy-day image enhancement model and combining the three models to serve as image data;
step 2: acquiring original image data of a road in the existing adverse meteorological environment, and inputting the original image data into a multi-model image enhancement unit to obtain enhanced image data;
and step 3: training a meteorological identification model by using the original image data and the enhanced image data as training data;
and 4, step 4: acquiring real-time image data of a road in a current meteorological environment, inputting the real-time image data into a meteorological identification model, and obtaining a measurement set N ═[ N1, N2, N3, N4], wherein N1, N2, N3 and N4 respectively represent rainy day probability, snowy day probability, foggy day probability and other weather probability, and the sum of N1, N2, N3 and N4 is 1;
and 5: acquiring current meteorological data to obtain an actual set P ═ P1, P2, P3, P4], wherein P1, P2, P3 and P4 respectively represent rainy days, snowy days, foggy days and other weather, and P1, P2, P3 and P4 are all represented by 0 or 1;
step 6: introducing a multi-perception strategy, performing fusion judgment on a measured data set N and an actual data set P, and outputting a final set A [ a1, a2, a3, a4], wherein a1 ═ N1 ═ P1, a2 ═ N2 ═ P2, a3 ═ N3 ═ P3, and a4 ═ N4 ═ P4, so as to obtain an actual weather type of the photographed road condition;
and 7: and inputting the real-time image data of the road in the current meteorological environment into the corresponding image enhancement model according to the actual meteorological type, and outputting the enhanced real-time image data.
The working principle and the beneficial effects of the invention are as follows:
1. the invention realizes video monitoring under different adverse meteorological environment scenes based on multi-model image enhancement.
2. The introduction of the multi-perception strategy improves the robustness of the system.
3. The invention improves the video quality of road monitoring in adverse weather such as rain, snow and fog, and reduces the influence of these adverse environmental factors on image recognition technology.
The present invention will be described in detail with reference to the accompanying drawings.
Drawings
FIG. 1 is a flow chart of the present invention;
fig. 2 is a diagram of a CNN network architecture of the present invention;
FIG. 3 is a network framework diagram of the weather identification model of the present invention;
FIG. 4 is a flow chart of the rainy image enhancement model of the present invention;
FIG. 5 is a flow chart of the snow day image enhancement model of the present invention.
Detailed Description
The technical solutions of the present invention are further described in detail below with reference to specific examples and drawings, but the scope and implementation of the present invention are not limited thereto.
Embodiment 1: as shown in FIG. 1,
the invention relates to a road video monitoring improvement method in an adverse meteorological environment, which ensures effective monitoring in rain, snow and fog meteorological environments and comprises the following steps:
Step 1: constructing a rainy-day image enhancement model, a foggy-day image enhancement model and a snowy-day image enhancement model, and combining the three models into a multi-model image enhancement unit;
Step 2: acquiring original image data of roads in existing adverse meteorological environments, and inputting the original image data into the multi-model image enhancement unit to obtain enhanced image data;
Step 3: training a meteorological identification model using the original image data and the enhanced image data as training data;
Step 4: acquiring real-time image data of the road in the current meteorological environment, and inputting the real-time image data into the meteorological identification model to obtain a measurement set N = [n1, n2, n3, n4], where n1, n2, n3 and n4 respectively represent the probabilities of rain, snow, fog and other weather, and n1 + n2 + n3 + n4 = 1;
Step 5: acquiring current meteorological data to obtain an actual set P = [p1, p2, p3, p4], where p1, p2, p3 and p4 respectively represent rain, snow, fog and other weather, each taking the value 0 or 1;
Step 6: introducing a multi-perception strategy, performing fusion judgment on the measurement set N and the actual set P, and outputting a final set A = [a1, a2, a3, a4], where a1 = n1 × p1, a2 = n2 × p2, a3 = n3 × p3 and a4 = n4 × p4, thereby obtaining the actual weather type of the photographed road scene;
Step 7: inputting the video stream of the road in the current meteorological environment into the corresponding image enhancement model according to the actual weather type, and outputting an enhanced video stream.
To determine the weather condition, two approaches are generally available: (1) acquiring current meteorological data from a weather website; (2) judging through a meteorological identification model. The invention combines the two. A large amount of rainy, snowy and foggy road weather data is first collected from open-source data on the network and from real road scenes, and a convolutional neural network is used to train a meteorological identification model capable of judging the three adverse weather types. Current meteorological data is then acquired from a weather website, and a multi-perception-strategy fusion judgment is applied to the video stream, so that the actual weather type of the filmed road scene can be output accurately. When the environment is determined to be adverse rain, snow or fog weather, the video stream is sent to the corresponding image enhancement model to obtain an enhanced video stream.
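As an illustration, a minimal sketch of the fusion judgment is given below (Python with numpy; the LABELS list, the fuse helper and the zero-product fallback are assumptions of this sketch, not part of the patent):

```python
import numpy as np

LABELS = ["rain", "snow", "fog", "other"]

def fuse(n, p):
    """Fusion judgment: A = N * P element-wise, then pick the weather type."""
    a = np.asarray(n, dtype=float) * np.asarray(p, dtype=float)
    if a.sum() == 0:           # model and website disagree on every class;
        a = np.asarray(n)      # assumed fallback: trust the model alone
    return LABELS[int(np.argmax(a))], a

# Example from the description: the model favors snow, the website says rain
weather, a = fuse([0.3, 0.5, 0.1, 0.1], [1, 0, 0, 0])
print(weather, a)   # -> rain [0.3 0.  0.  0. ]
```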
Embodiment 2:
the invention also includes inputting the enhanced video stream to a road detection module.
The road detection module implements the functions required of the video detection system: it analyzes the traffic images captured by the cameras to detect, locate, identify and track traffic targets such as vehicles and pedestrians, and analyzes and judges the traffic behavior of the detected, tracked and identified targets, thereby computing and collecting various traffic flow data while performing the adjustments and controls involved in traffic management, realizing intelligent traffic management.
In order to improve road monitoring performance in an adverse meteorological environment, the method adopts a vehicle recognition algorithm whose flow is based on an image recognition algorithm using the common faster-rcnn network. The training data consists of road pictures of the three weather types (rain, snow and fog) that have passed through the corresponding image enhancement model (manually classified and sent to the corresponding network for enhancement) and road pictures of the same three weather types that have not; the pictures carry vehicle labels. Faster-rcnn model training is performed with enhanced data and randomly drawn raw image data at a ratio of 3:1, as in the sketch below. After training, following the framework shown in FIG. 1, the recognition of road vehicles in the three adverse weather types of rain, snow and fog is improved.
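For illustration, a short sketch of drawing the 3:1 enhanced-to-raw training mix is given below (Python; the function name and the image-list inputs are hypothetical placeholders):

```python
import random

def build_training_mix(enhanced_imgs, raw_imgs, seed=0):
    """Return a list mixing enhanced and raw pictures at a 3:1 ratio."""
    random.seed(seed)
    n_raw = len(enhanced_imgs) // 3                              # 3 enhanced : 1 raw
    drawn = random.sample(raw_imgs, min(n_raw, len(raw_imgs)))   # random draw
    return enhanced_imgs + drawn          # labeled set fed to faster-rcnn training
```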
The module can be flexibly configured according to the functions required by the actual monitoring task (e.g. vehicle and pedestrian traffic targets); this scheme is not unique.
Embodiment 3:
the construction of the meteorological identification model is based on a convolutional neural network, the convolutional neural network is a deep neural network with a convolutional structure, the convolutional structure can reduce the memory occupied by the deep network, and three key operations are that the local receptive field is used, the weight sharing is used, and the powing layer is used, so that the parameter number of the network is effectively reduced, and the overfitting problem of the model is relieved. A common network structure is shown in figure 2,
the convolutional neural network mainly comprises five basic constituent units of the convolutional neural network: an input layer, a convolutional layer, a pooling layer, a full-link layer, and an output layer.
Inputting a formula: v ═ conv2(W, X, "valid") + b
Outputting a formula:
Figure BDA0003263753240000041
the above input-output formula is for each convolutional layer, each convolutional layer has a different weight matrix W, and W, X, Y are in matrix form, for the last fully-connected layer, set as the L-th layer, the output is yL in vector form, the expected output is d, then the total error formula is:
Figure BDA0003263753240000042
where conv2(W, X, "valid") is a function of convolution operation, the third parameter valid indicates the type of convolution operation, and the former convolution method is valid. W is the convolution kernel matrix, X is the input matrix, b is the offset,
Figure BDA0003263753240000043
is an activation function. In the total errorIs a vector of the desired output and the net output, respectively.
The CNN is trained by gradient descent and the back-propagation algorithm; the gradient formula of the fully-connected layers is identical to that of a BP network. For the convolutional and pooling layers, the error sensitivities propagate in the standard form:
δ_l = δ_{l+1} ∗ rot180(W_{l+1}) ⊙ φ′(V_l) (convolutional layer); δ_l = upsample(δ_{l+1}) ⊙ φ′(V_l) (pooling layer)
A CNN is a feedforward neural network: each neuron connects only to neurons in the previous layer, receives the previous layer's output and, after its operation, passes the result to the next layer; there is no feedback between layers.
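As a concrete illustration of the input and output formulas above, the following sketch computes one "valid" convolution layer forward pass (Python with numpy/scipy; the sizes, the tanh activation and the random data are toy assumptions):

```python
import numpy as np
from scipy.signal import convolve2d

phi = np.tanh                            # an example activation function
X = np.random.rand(5, 5)                 # input matrix
W = np.random.rand(3, 3)                 # convolution kernel matrix
b = 0.1                                  # bias
V = convolve2d(X, W, mode="valid") + b   # V = conv2(W, X, "valid") + b
Y = phi(V)                               # Y = phi(V); here a 3x3 output map
```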
Using a convolutional neural network and the labeled road weather picture data of various types, a model capable of identifying various weather types can therefore be trained. Four types of weather pictures are collected (positive samples: rain, snow and fog; negative samples: weather other than these three types), with a positive-to-negative ratio of 1:3; the pictures are uniformly resized to 224x224 and the weather identification model is trained.
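A minimal PyTorch sketch of such a four-class weather identification model is given below, assuming the vgg16-based architecture of FIG. 3 with the stock classifier head swapped to 4 outputs (the 1x1x50 intermediate layer of the full design is omitted for brevity; data loading and the training loop are not shown):

```python
import torch
import torch.nn as nn
from torchvision import models

def build_weather_model():
    net = models.vgg16(weights=None)         # expects 224x224x3 input
    net.classifier[6] = nn.Linear(4096, 4)   # rain / snow / fog / other head
    return net

model = build_weather_model()
x = torch.randn(1, 3, 224, 224)              # one road image, NCHW layout
n = torch.softmax(model(x), dim=1)           # measurement set N, sums to 1
```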
Embodiment 4:
the specific operation of the multi-sensing strategy fusion decision in the present invention is,
based on a weather category identification model built by vgg16 network, as shown in fig. 3, the result of the network finally subjected to softmax prediction is classified into 4 probabilities, that is, an output measurement set N ═ N1, N2, N3, N4], where N1, N2, N3, and N4 respectively represent rainy day probability, snowy day probability, foggy day probability, and other weather probabilities, and the sum of N1, N2, N3, and N4 is 1;
acquiring real-time meteorological data through a meteorological website to obtain an actual set P ═ P1, P2, P3 and P4], wherein P1, P2, P3 and P4 respectively represent rainy days, snowy days, foggy days and other weather, and P1, P2, P3 and P4 are all represented by 0 or 1;
introducing a multi-perception strategy, performing fusion judgment on a measured data set N and an actual data set P, and outputting a final set AA of [ a1, a2, a3 and a4], wherein a1 is N1 is P1, a2 is N2 is P2, a3 is N3 is P3, and a4 is N4 is P4, so as to obtain an actual weather type of the photographed road condition;
a fusion decision based on a multi-perception strategy can solve two problems.
1. When the CNN-based weather category identification model misrecognizes, the real-time meteorological data provides auxiliary correction. For example, if the actual weather is rain but N = [0.3, 0.5, 0.1, 0.1] while P = [1, 0, 0, 0], then A = [0.3, 0, 0, 0] is computed and the video correctly enters the rain enhancement model.
2. When two or more of rain, snow and fog are mixed, the CNN-based weather identification model can judge which adverse condition has the larger influence on the current video stream and enhance it with emphasis, improving the quality of the monitoring video and handling the complex-weather problem. For example, with N = [0.3, 0.5, 0.1, 0.1] and P = [1, 1, 0, 0], A = [0.3, 0.5, 0, 0] and the video stream enters the snow enhancement model.
When other weather is identified, the video stream goes directly to the road detection module.
Embodiment 5: as shown in FIG. 4,
the rainy-day image enhancement model in the invention is as follows.
By analyzing the motion and optical characteristics of rain, the rainy-day image enhancement model uses a frame-difference method together with a rain dynamics and optical model to identify and remove rain.
Basic preconditions of the algorithm are as follows:
1) the gray scale value of the rain noise pixel is greater than the gray scale value of the background pixel.
2) The same position pixel of two continuous frame images is not covered by the same rain noise.
Three consecutive frames are then extracted from the video, and each pixel of the second frame is tested for rain-noise contamination by the condition ΔI = I_n − I_{n−1} = I_n − I_{n+1} ≥ C, where I_n is the gray value of the pixel in the n-th frame and C is the gray-difference judgment threshold. Pixels satisfying this condition are then examined further to find those genuinely contaminated by rain noise. The specific idea is to use the property that the gray-value difference ΔI caused by the motion trail of rain noise is linearly related to the rain-contaminated background gray value I_bg, satisfying ΔI = β · I_{n−1} + α, where α and β are constants. Finally, the gray value of each contaminated pixel is replaced by the average of the pixel gray values at the corresponding positions in the frames before and after it.
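The following sketch illustrates the three-frame test and repair (Python with numpy; the threshold value and the tolerance used in place of exact gray-difference equality are assumptions of this sketch, and the β/α linear refinement step is omitted):

```python
import numpy as np

def derain_middle_frame(prev, cur, nxt, C=10.0, tol=1.0):
    """Repair rain pixels of `cur` using its neighbouring frames (grayscale floats)."""
    d1 = cur - prev                      # delta against the previous frame
    d2 = cur - nxt                       # delta against the next frame
    # rain candidates: brighter than both neighbours by at least C, with the
    # two deltas (nearly) equal, per the three-frame condition above
    rain = (d1 >= C) & (d2 >= C) & (np.abs(d1 - d2) <= tol)
    out = cur.copy()
    out[rain] = 0.5 * (prev[rain] + nxt[rain])   # average of front/back frames
    return out
```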
Embodiment 6: as shown in FIG. 5,
the snow-day image enhancement model in the invention is as follows.
The snow-day image enhancement model uses k-means clustering: exploiting the characteristics of snow-degraded images, it clusters the gray values of each pixel coordinate in the video. The algorithm first extracts the gray values of one pixel coordinate across all frames and applies k-means clustering to the gray values of that pixel. Two initial cluster centers W_r and W_b are selected at the start of k-means clustering; to speed up clustering, the maximum and minimum of the pixel gray values can be chosen. The Euclidean distance from each remaining gray value I_p to W_r and W_b is then computed:
d(I_p, W) = sqrt( (I_p − W)² )
When d(I_p, W_r) < d(I_p, W_b), the gray value I_p belongs to the snow-noise class, and otherwise to the background class. This operation counts as one clustering pass, after which the cluster centers are updated:
W_{n+1} = (1 / |S_n|) · Σ_{I_p ∈ S_n} I_p
where S_n is the set of gray values assigned to the cluster center W_n after the n-th pass, |S_n| is the number of elements in S_n, and W_{n+1} is the updated cluster center. Clustering stops when the gray values of the two cluster centers are stable. The average of the gray values below the final cluster center is then taken as the background gray, and the gray values of the pixels above the final cluster center are replaced with this background gray. Finally, the remaining pixel coordinates are processed by the same operation to complete snow removal.
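A sketch of the per-pixel temporal two-class k-means is given below (Python with numpy; the snow/background assignment follows the max/min initialisation above, treating the background class as the "below-center" set is an interpretation, and the convergence test is an assumption):

```python
import numpy as np

def desnow_pixel_series(series, max_iter=100):
    """2-means over one pixel's gray values across all frames; returns the cleaned series."""
    wr, wb = float(series.max()), float(series.min())     # initial centers
    snow = np.zeros(series.shape, dtype=bool)
    for _ in range(max_iter):
        snow = np.abs(series - wr) < np.abs(series - wb)  # snow-noise class
        wr_new = series[snow].mean() if snow.any() else wr
        wb_new = series[~snow].mean() if (~snow).any() else wb
        if wr_new == wr and wb_new == wb:                 # centers stable: stop
            break
        wr, wb = wr_new, wb_new
    bg = series[~snow].mean()            # background gray value
    out = series.astype(float).copy()
    out[snow] = bg                       # overwrite snow-class samples
    return out
```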
Embodiment 7:
the foggy day image enhancement model utilizes an image enhancement algorithm based on dark channel prior defogging. Dark channels mean that in most non-sky local areas, some pixels will always have at least one color channel with a very low value. In other words, the minimum value of the light intensity of the region is a very small number. We give a mathematical definition of the dark channel, which for an arbitrary input image J can be expressed by:
Figure BDA0003263753240000064
in the formula, JcRepresenting each channel of the color image and omega (x) represents a window centered on pixel x.
According to the formula, firstly, the minimum value in each pixel RGB component is calculated, the minimum value is stored in a gray-scale image with the same size as the original image, then the minimum value filtering is carried out on the gray-scale image, the Radius of the filtering is determined by the size of a window, and generally, WindowSize is 2 × Radius + 1;
the theory of dark channel priors states that: j. the design is a squaredark→0
In real life, there are three major phonemes responsible for low channel values in the dark primaries: a) shadows of glass windows in automobiles, buildings, and cities, or projections of natural landscapes such as leaves, trees, and rocks; b) brightly colored objects or surfaces, some of which have very low values in the three channels of RGB (e.g. green grass/trees/plants, red or yellow flowers/leaves, or blue water); c) darker objects or surfaces, such as dark trunks and stones. In general, natural scenes are shaded or colored everywhere, and the dark primaries of these scenes are always dark gray.
The specific theoretical derivation is as follows:
First, in computer vision and computer graphics, the fog image formation model described by the following equation is widely used: I(x) = J(x) · t(x) + A · (1 − t(x)), where I(x) is the image at hand (the image to be defogged), J(x) is the image to be recovered, A is the global atmospheric light component, and t(x) is the transmittance. I(x) is known and the target is J(x); this is obviously an equation with infinitely many solutions, so some prior is needed.
Slightly rearranging the above formula transforms it into:
I_c(x) / A_c = t(x) · J_c(x) / A_c + 1 − t(x)
where the superscript c denotes one of the three R/G/B channels.
First, assume that the transmittance t(x) is constant within each window and denote it t̃(x), and take the value of A as given. Computing the double minimum (over the window and over the channels) on both sides of the above formula yields:
min_{y∈Ω(x)} min_c ( I_c(y) / A_c ) = t̃(x) · min_{y∈Ω(x)} min_c ( J_c(y) / A_c ) + 1 − t̃(x)
where J is the fog-free image to be solved, for which the dark channel prior gives:
min_{y∈Ω(x)} min_c ( J_c(y) / A_c ) → 0
It can therefore be deduced that:
t̃(x) = 1 − min_{y∈Ω(x)} min_c ( I_c(y) / A_c )
This is the estimate of the transmittance t̃(x).
In real life, some particles exist in the air even on clear days, so distant objects are still slightly affected by haze; moreover, the presence of haze is a cue by which humans perceive depth. A certain degree of fog therefore needs to be retained during defogging, which is achieved by introducing a factor ω ∈ [0, 1] into the above formula, modifying it to:
t̃(x) = 1 − ω · min_{y∈Ω(x)} min_c ( I_c(y) / A_c )
where ω = 0.95.
The reasoning above takes the global atmospheric light A as known; in practice, A can be obtained from the foggy image by means of the dark channel map, as follows:
(1) take the brightest 0.1% of pixels from the dark channel map;
(2) among those positions, find the value of the corresponding point with the highest luminance in the original foggy image I and use it as A.
Next, the fog-free image is recovered by the formula J = (I − A)/t̃ + A.
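A compact sketch of the full dark-channel recovery chain described above is given below (Python with OpenCV; the window radius and the lower bound on t̃ are assumed tuning choices not fixed by the description):

```python
import cv2
import numpy as np

def defog(img, radius=7, omega=0.95, t_min=0.1):
    """img: HxWx3 float array in [0, 1]; returns the recovered image J."""
    ksize = 2 * radius + 1                        # WindowSize = 2 * Radius + 1
    kernel = np.ones((ksize, ksize), np.uint8)
    dark = cv2.erode(img.min(axis=2), kernel)     # dark channel via min filter

    # A: brightest original pixel among the top 0.1% of the dark channel
    flat = dark.ravel()
    idx = np.argsort(flat)[-max(1, flat.size // 1000):]
    ys, xs = np.unravel_index(idx, dark.shape)
    cand = img[ys, xs]
    A = cand[cand.sum(axis=1).argmax()]           # highest-luminance candidate

    t = 1.0 - omega * cv2.erode((img / A).min(axis=2), kernel)  # transmittance
    t = np.maximum(t, t_min)                      # assumed floor to keep t > 0
    return np.clip((img - A) / t[..., None] + A, 0.0, 1.0)
```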
Experiments show that after an actual foggy road video stream is processed by the dark channel method, the video becomes darker overall, so a second image enhancement is performed with histogram equalization. Histogram equalization is a commonly used image enhancement technique. Suppose an image is predominantly dark: its histogram is skewed toward the low end of the gray scale, and all image details are compressed into the dark end of the histogram. If the dark-end gray levels can be "stretched" to produce a more evenly distributed histogram, the image becomes much clearer.
The algorithm comprises the following steps:
(1) calculating the histogram distribution of the original image;
(2) calculating the cumulative probability distribution of the histogram of the original image;
(3) mapping, which can be expressed as:
A'(i, j) = floor( (L − 1) / A0 · Σ_{k=0}^{A(i,j)} H(k) )
where A is the original image, H is its histogram, L is the number of gray levels and A0 is the number of pixels.
The defogging enhancement of the target video stream is complete after the dark channel prior defogging and the histogram equalization algorithm.
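For completeness, a minimal histogram-equalization sketch matching steps (1)-(3) is given below (Python with numpy; cv2.equalizeHist provides the same operation in one call):

```python
import numpy as np

def equalize(gray, levels=256):
    """gray: 2-D uint8 array; returns the equalized image."""
    hist = np.bincount(gray.ravel(), minlength=levels)   # step (1): histogram
    cdf = hist.cumsum() / gray.size                      # step (2): cumulative prob.
    lut = np.floor((levels - 1) * cdf).astype(np.uint8)  # step (3): gray mapping
    return lut[gray]
```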

Claims (7)

1. A road video monitoring improvement method in an adverse meteorological environment, characterized by comprising the following steps:
step 1: constructing a rainy-day image enhancement model, a foggy-day image enhancement model and a snowy-day image enhancement model, and combining the three models into a multi-model image enhancement unit;
step 2: acquiring original image data of roads in existing adverse meteorological environments, and inputting the original image data into the multi-model image enhancement unit to obtain enhanced image data;
step 3: training a meteorological identification model using the original image data and the enhanced image data as training data;
step 4: acquiring real-time image data of the road in the current meteorological environment, and inputting the real-time image data into the meteorological identification model to obtain a measurement set N = [n1, n2, n3, n4], where n1, n2, n3 and n4 respectively represent the probabilities of rain, snow, fog and other weather, and n1 + n2 + n3 + n4 = 1;
step 5: acquiring current meteorological data to obtain an actual set P = [p1, p2, p3, p4], where p1, p2, p3 and p4 respectively represent rain, snow, fog and other weather, each taking the value 0 or 1;
step 6: introducing a multi-perception strategy, performing fusion judgment on the measurement set N and the actual set P, and outputting a final set A = [a1, a2, a3, a4], where a1 = n1 × p1, a2 = n2 × p2, a3 = n3 × p3 and a4 = n4 × p4, thereby obtaining the actual weather type of the photographed road scene;
step 7: inputting the real-time image data of the road in the current meteorological environment into the corresponding image enhancement model according to the actual weather type, and outputting enhanced real-time image data.
2. The method for monitoring and improving the road video under the adverse meteorological environment according to claim 1, wherein the step 3 is specifically,
step 301: constructing a meteorological identification model based on a convolutional neural network;
step 302: acquiring image data in four types of meteorological environments, namely rainy days, snowy days, foggy days and other weather, wherein the image data in the rainy days, the snowy days and the foggy days are used as positive samples, the image data in the other weather is used as negative samples, and the ratio of the positive samples to the negative samples is 1: 3;
step 303: making labels for all image data, and inputting the labels into a meteorological identification model for training;
step 304: and obtaining the trained meteorological identification model.
3. The method for monitoring and improving the road video under the adverse meteorological environment according to claim 1, wherein the step 4 is specifically,
step 401: acquiring real-time image data with the size of 224x224x3, and inputting the real-time image data into a meteorological identification model;
step 402: performing two convolutions with 64 convolution kernels of 3x3 + ReLU; the size becomes 224x224x64;
step 403: performing max pooling of size 2x2; the size becomes 112x112x64;
step 404: performing two convolutions with 128 convolution kernels of 3x3 + ReLU; the size becomes 112x112x128;
step 405: performing max pooling of size 2x2; the size becomes 56x56x128;
step 406: performing three convolutions with 256 convolution kernels of 3x3 + ReLU; the size becomes 56x56x256;
step 407: performing max pooling of size 2x2; the size becomes 28x28x256;
step 408: performing three convolutions with 512 convolution kernels of 3x3 + ReLU; the size becomes 28x28x512;
step 409: performing max pooling of size 2x2; the size becomes 14x14x512;
step 410: performing three convolutions with 512 convolution kernels of 3x3 + ReLU; the size becomes 14x14x512;
step 411: performing max pooling of size 2x2; the size becomes 7x7x512;
step 412: applying fully-connected layers of 1x1x4096 (two layers), 1x1x50 (one layer) and 1x1x4 (one layer) + ReLU;
step 413: outputting the measurement set N = [n1, n2, n3, n4] through softmax.
4. The method for monitoring and improving the road video under the adverse meteorological environment according to claim 1, characterized by further comprising the step 8: the enhanced real-time image data is input to a road detection module.
5. The method for improving road video surveillance in adverse meteorological environment according to claim 1, wherein the rainy day image enhancement model comprises,
step A1: acquiring any one frame of image in a video stream of a road under a current meteorological environment as a first frame of image;
step A2: judging whether the first frame image meets the background brightness linear constraint, if so, performing the step A3, and if not, repeating the step A1;
step A3: extracting continuous three frames of images from the first frame of image, and judging coordinate pixel points of the second frame of image polluted by rain noise;
step A4: replacing the gray value of the corresponding pixel in the frame image by the average value of the gray values of the pixels at the corresponding positions of the front frame image and the rear frame image of the frame image;
step A5: and finishing the image enhancement processing in the rainy day.
6. The method for monitoring and improving the road video in the adverse meteorological environment according to claim 1, wherein the snow-day image enhancement model comprises,
step B1: acquiring gray values of a coordinate pixel point in a video stream of a road in a current meteorological environment in all frames;
step B2: selecting the maximum and minimum of the pixel gray values as the initial cluster centers W_r and W_b, and performing k-means clustering on the gray values in all frames;
step B3: updating the cluster center after each clustering pass:
W_{n+1} = (1 / |S_n|) · Σ_{I_p ∈ S_n} I_p
where S_n is the set of gray values assigned to the cluster center W_n after the n-th pass, |S_n| is the number of elements in S_n, and W_{n+1} is the updated cluster center;
step B4: stopping clustering until the gray values of the two clustering centers are stable, otherwise, repeatedly clustering;
step B5: taking the gray value average value of the pixels with the gray values smaller than the final clustering center as background gray, and replacing the gray value of the pixels with the gray values higher than the final clustering center with the background gray;
step B6: repeating the steps B2-B5 for all the pixel points;
step B7: the snow image enhancement process ends.
7. The method for improving road video surveillance in adverse meteorological environment according to claim 1, wherein the fog day image enhancement model comprises,
step C1: processing the actual foggy day road video stream by a dark channel method;
step C2: carrying out secondary image enhancement by using histogram equalization;
step C3: and finishing the foggy day image enhancement processing.
CN202111080366.0A 2021-09-15 2021-09-15 Road video monitoring and lifting method under bad weather environment Active CN113792793B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111080366.0A CN113792793B (en) 2021-09-15 2021-09-15 Road video monitoring and lifting method under bad weather environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111080366.0A CN113792793B (en) 2021-09-15 2021-09-15 Road video monitoring and lifting method under bad weather environment

Publications (2)

Publication Number Publication Date
CN113792793A true CN113792793A (en) 2021-12-14
CN113792793B CN113792793B (en) 2024-01-23

Family

ID=78878426

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111080366.0A Active CN113792793B (en) 2021-09-15 2021-09-15 Road video monitoring and lifting method under bad weather environment

Country Status (1)

Country Link
CN (1) CN113792793B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104346538A (en) * 2014-11-26 2015-02-11 中国测绘科学研究院 Earthquake hazard evaluation method based on control of three disaster factors
KR20150081906A (en) * 2014-01-07 2015-07-15 한국도로공사 Taking a photograph system when bumped by car and method for controlling thereof
CN107749067A (en) * 2017-09-13 2018-03-02 华侨大学 Fire hazard smoke detecting method based on kinetic characteristic and convolutional neural networks
CN112308799A (en) * 2020-11-05 2021-02-02 山东交通学院 Offshore road complex environment visibility optimization screen display method based on multiple sensors
CN112330558A (en) * 2020-11-05 2021-02-05 山东交通学院 Road image recovery early warning system and method based on foggy weather environment perception


Also Published As

Publication number Publication date
CN113792793B (en) 2024-01-23

Similar Documents

Publication Publication Date Title
CN109740465B (en) Lane line detection algorithm based on example segmentation neural network framework
CN109977812B (en) Vehicle-mounted video target detection method based on deep learning
CN110263706B (en) Method for detecting and identifying dynamic target of vehicle-mounted video in haze weather
CN108615226B (en) Image defogging method based on generation type countermeasure network
CN110555465B (en) Weather image identification method based on CNN and multi-feature fusion
CN104134068B (en) Monitoring vehicle characteristics based on sparse coding represent and sorting technique
CN110310241B (en) Method for defogging traffic image with large air-light value by fusing depth region segmentation
CN109410129A (en) A kind of method of low light image scene understanding
CN110796009A (en) Method and system for detecting marine vessel based on multi-scale convolution neural network model
CN110706239B (en) Scene segmentation method fusing full convolution neural network and improved ASPP module
CN110717921B (en) Full convolution neural network semantic segmentation method of improved coding and decoding structure
CN113627228A (en) Lane line detection method based on key point regression and multi-scale feature fusion
He et al. A feature fusion method to improve the driving obstacle detection under foggy weather
CN110807384A (en) Small target detection method and system under low visibility
CN114693924A (en) Road scene semantic segmentation method based on multi-model fusion
CN113033687A (en) Target detection and identification method under rain and snow weather condition
CN110889360A (en) Crowd counting method and system based on switching convolutional network
CN114627269A (en) Virtual reality security protection monitoring platform based on degree of depth learning target detection
CN112164010A (en) Multi-scale fusion convolution neural network image defogging method
CN115019340A (en) Night pedestrian detection algorithm based on deep learning
CN113326846B (en) Rapid bridge apparent disease detection method based on machine vision
CN113158747A (en) Night snapshot identification method for black smoke vehicle
CN111160282B (en) Traffic light detection method based on binary Yolov3 network
CN113792793B (en) Road video monitoring and lifting method under bad weather environment
CN116385293A (en) Foggy-day self-adaptive target detection method based on convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant