CN109345479B - Real-time preprocessing method and storage medium for video monitoring data

Info

Publication number
CN109345479B
Authority
CN
China
Prior art keywords
pixel
filter
video
image
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811136740.2A
Other languages
Chinese (zh)
Other versions
CN109345479A (en)
Inventor
高原原 (Gao Yuanyuan)
刘灵芝 (Liu Lingzhi)
白立飞 (Bai Lifei)
温秀秀 (Wen Xiuxiu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC Information Science Research Institute
Original Assignee
CETC Information Science Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC Information Science Research Institute filed Critical CETC Information Science Research Institute
Priority to CN201811136740.2A priority Critical patent/CN109345479B/en
Publication of CN109345479A publication Critical patent/CN109345479A/en
Application granted granted Critical
Publication of CN109345479B publication Critical patent/CN109345479B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/73
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10024 Color image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance

Abstract

A real-time preprocessing method and storage medium for video surveillance data. The method includes: obtaining a minimum channel image and computing its down-sampled dark primary D_down; after the video image is converted to a gray image, sorting the pixels in descending order of gray value, selecting a certain proportion of high-brightness pixels, and taking their average as the global atmospheric light estimate A; applying to the down-sampled dark primary D_down a filtering process that raises locally small values, obtaining the optimized down-sampled dark primary D_filter; upsampling D_filter to obtain the scattered light estimate at each position of the video, and inversely solving the fog-free picture of the t-th frame from the scattered light estimate S and the global atmospheric light estimate A according to a preset foggy-day imaging model. By computing a down-sampled dark primary, the invention reduces the amount of computation and shortens the running time; by upsampling D_filter block by block to estimate the scattered light, the fog-free picture can be inversely solved block by block, which reduces the memory requirement and makes the method better suited to the uneven equipment configurations of Internet of Things environments.

Description

Real-time preprocessing method and storage medium for video monitoring data
Technical Field
The application relates to a video data processing method, and in particular to a real-time preprocessing method for video surveillance data in contexts such as the Internet of Things.
Background
Internet of Things monitoring is a common security and protection measure: a sensor network is used to react in a coordinated way to intuitive, accurate, and timely video content. However, current monitoring systems are very sensitive to weather conditions. In particular, under low-visibility foggy conditions, observed targets become blurred and color fidelity drops markedly. Therefore, to improve the adaptability of IoT monitoring systems to severe weather, foggy-day IoT video needs to be preprocessed so as to improve its visual quality.
At present, existing video defogging methods can be roughly divided into two categories according to whether a foggy-day imaging model is used: video enhancement methods based on no imaging model, and video restoration methods based on an imaging model.
Typical video enhancement methods that use no imaging model include histogram equalization, wavelet transforms, Retinex, and so on. Histogram equalization is simple to implement and runs fast, but it tends to compress the pixel gray levels, so the processed video can lose detail. The wavelet and Retinex methods enhance the non-low-frequency sub-bands of the video, making details more prominent. However, none of these methods considers the physical cause of video degradation, so they are limited: they improve the visual effect of the video only to a certain extent and cannot achieve true defogging.
Video restoration based on an imaging model essentially solves the relevant parameters of the atmospheric scattering model to obtain a clear, fog-free video of the scene, thereby improving picture quality. Such methods mainly comprise: methods based on polarization properties, methods based on depth information, and methods based on prior knowledge. Polarization-based methods use polarizers to acquire two or more images of the same scene with different degrees of polarization; they require hardware support, which severely limits their adoption. Depth-based methods acquire depth information of the scene to estimate its three-dimensional structure and restore a clear video; they require a certain amount of user interaction and cannot run fully automatically. Prior-knowledge-based methods estimate the variables using local statistical rules or assumptions and then solve for the fog-free video; they are physically well founded, simple to implement, and can produce satisfactory restorations, so in practice they are the most widely used. Among the prior-knowledge-based foggy-day video restoration methods, the algorithm based on the dark channel prior proposed by He Kaiming et al. is widely recognized as having the best defogging effect to date. The dark channel prior states that in outdoor images taken in clear weather, at least one color channel has very low brightness, approaching 0, in non-sky regions. According to this prior, the minimum value of a local region of the foggy image can be used as an estimate of the scattered light variable, and a fog-free picture can then be obtained from the imaging model. However, the scattered light obtained by minimum filtering exhibits a blocking effect, so He et al. proposed refining the scattered light with a soft matting algorithm, whose time and space complexity is high, making real-time processing of foggy video difficult.
To improve efficiency, many researchers have proposed replacing the soft matting algorithm with fast filtering. Median filtering, guided filtering, and fast bilateral filtering have all been used to optimize the scattered light estimate. However, median filtering is not edge-preserving, cannot completely remove the blocking effect, and leaves halos after the video is defogged. Guided filtering and fast bilateral filtering have good edge-preserving behavior and run fast, but they trade space for time: their memory consumption is high, which is hard to satisfy in IoT environments where hardware configurations are uneven.
Therefore, providing a video preprocessing method that is fast, consumes little memory, and offers strong real-time performance, so that video can be defogged in real time and used conveniently in IoT environments, is a technical problem urgently needing a solution in the prior art.
Disclosure of Invention
The invention aims to provide a real-time preprocessing method and storage medium for video surveillance data that can defog video in real time and make it clear. The method runs fast, occupies little memory, and is convenient to use in IoT environments.
In order to achieve this purpose, the invention adopts the following technical scheme:
a real-time preprocessing method for video monitoring data, comprising the following steps:
a minimum channel obtaining step S110: buffering one frame of the input video sequence, denoted the t-th frame; comparing the R, G, B channels of each pixel of the frame and taking the minimum value, obtaining the minimum channel image I_min;
a dark primary calculation step S120: dividing the minimum channel map I_min of the t-th frame into non-overlapping sub-blocks and taking the minimum value within each sub-block, computing the down-sampled dark primary D_down of the minimum channel map I_min;
a global atmospheric light estimation step S130: after the t-th frame foggy image is converted into a gray image, sorting the pixels in descending order of gray value, selecting a certain proportion of high-brightness pixels, and taking their average as the global atmospheric light estimate A;
a local small value boosting step S140: applying, pixel by pixel, a filtering process that raises locally small values to the down-sampled dark primary D_down, obtaining the optimized down-sampled dark primary D_filter;
an inverse solution step S150: upsampling the optimized down-sampled dark primary D_filter to obtain the scattered light estimate S(x, y) at each position of the t-th frame, and inversely solving the fog-free picture of the t-th frame from the scattered light estimate S and the global atmospheric light estimate A according to a preset foggy-day imaging model.
Optionally, in step S110, I_min(x, y) = min(I_R(x, y), I_G(x, y), I_B(x, y)),
Where (x, y) is the coordinate location of the pixel, R, G, B represents the three color channels, and I is the buffered video frame.
Optionally, in step S120, the radius N of the non-overlapping sub-blocks is set to about 1/40 of the smaller of the width and height of the video frame;
and when the width or height of the video frame is not an integer multiple of the sub-block size, the frame is mirror-padded to extend its edges.
Optionally, in step S130, the average gray value of the brightest 0.01% of pixels is taken as the global atmospheric light estimate A.
Optionally, in step S130, the processed images are all assumed to be free of color cast or to have undergone white balance processing.
Optionally, step S140 specifically includes:
each pixel to be processed is taken as the center pixel and a processing window is defined; pixels in the window whose gray value is greater than or equal to that of the center pixel are given weight w = 1, and pixels whose gray value is less than that of the center pixel are given a weight w < 1. The filtering formula is:
D_filter(x, y) = [ Σ_{(i,j)∈Ω} w(i, j) · D_down(i, j) ] / [ Σ_{(i,j)∈Ω} w(i, j) ]
where (x, y) are the coordinates of the pixel being filtered, Ω is the window centered on (x, y), (i, j) indexes the pixels within the window, and the weight w(i, j) is given by:
w(i, j) = 1 if D_down(i, j) ≥ D_down(x, y), and w(i, j) = w < 1 if D_down(i, j) < D_down(x, y).
Optionally, in step S150, D_filter is upsampled block by block, the scattered light estimate is obtained block by block, and the fog-free picture is inversely solved block by block.
Optionally, in step S150, upsampling is implemented as follows:
the source image D_filter has size m × n and the target image has size a × b; the (p, q)-th pixel of the target image (row p, column q) is mapped back to the source image by the side-length ratios, giving coordinates (p × m/a, q × n/b); writing these floating-point coordinates as (k + u, l + v), where k and l are the integer parts and u and v are the fractional parts (floating-point numbers in [0, 1)), the value S(p, q) is determined by the four pixel values of the source image D_filter at coordinates (k, l), (k+1, l), (k, l+1), (k+1, l+1), i.e.: S(p, q) = (1-u) × (1-v) × D_filter(k, l) + (1-u) × v × D_filter(k, l+1) + u × (1-v) × D_filter(k+1, l) + u × v × D_filter(k+1, l+1).
Optionally, in step S150, the fog-free picture is inversely solved as:
J(x, y) = (I(x, y) - S(x, y)) / (1 - S(x, y)/A)
where J is the fog-free image, A(1 - t(x, y)) = S(x, y), t is the transmittance, A is the atmospheric light at infinity, and A(1 - t(x, y)) is the scattered light component of each pixel.
The invention also discloses a storage medium that can store computer-executable instructions which, when executed by a processor, perform the aforementioned real-time preprocessing method for video surveillance data.
The invention has the following advantages:
(1) In step S120, the invention obtains a down-sampled dark primary rather than sliding a window pixel by pixel to obtain a dark primary as large as the video frame, which reduces the amount of computation and shortens the running time of the filtering process of step S140.
(2) In step S150, the invention upsamples D_filter block by block to estimate the scattered light, so the fog-free picture can be inversely solved block by block, which reduces the memory requirement and makes the method better suited to the uneven equipment configurations of IoT environments.
Drawings
FIG. 1 is a flow diagram of a method for real-time pre-processing of video surveillance data in accordance with an embodiment of the present invention;
FIG. 2 is a schematic diagram of computing the down-sampled dark primary in the dark primary calculation step, according to a specific embodiment of the present invention;
FIG. 3 shows the defogging effect according to an embodiment of the invention, where FIG. 3(a) is a foggy image and FIG. 3(b) is the defogged video image.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
In order to solve the problem of fogging in video surveillance, and in particular to improve video quality in foggy weather, the invention mainly comprises the following steps: first, each frame is divided into non-overlapping sub-blocks and the down-sampled dark primary is computed; then each pixel of the down-sampled dark primary undergoes a filtering process that raises locally small values; finally, the optimized down-sampled dark primary is upsampled to obtain the scattered light estimate, and the picture of the video frame is restored using a single-scattering imaging model. The invention achieves video defogging with high computational efficiency, a small memory footprint, and high fidelity of the resulting video picture.
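Purely as an illustration, the following sketch (Python with NumPy; every function name here is ours, not the patent's) shows how steps S110 to S150 chain together. Each helper is sketched under the corresponding step below; a block-wise variant of the last two calls would process the frame in tiles to save memory, as the patent prefers.

    import numpy as np

    def dehaze_frame(frame: np.ndarray, block: int = 15) -> np.ndarray:
        """Defog one H x W x 3 uint8 frame (assumed RGB and free of color cast).

        block = 15 is illustrative; the patent suggests a sub-block size of
        roughly 1/40 of the smaller frame dimension.
        """
        i_min = minimum_channel(frame)                    # step S110
        d_down = downsampled_dark_primary(i_min, block)   # step S120
        a = global_atmospheric_light(frame)               # step S130
        d_filter = lift_local_minima(d_down)              # step S140
        h, w = i_min.shape
        s = bilinear_upsample(d_filter, h, w)             # step S150: scattered light S
        return recover_frame(frame, s, a)                 # step S150: inverse solution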
Specifically, referring to FIG. 1, the specific steps of a method for real-time preprocessing of video monitoring data according to a specific embodiment of the present invention are shown:
a minimum channel obtaining step S110: buffering one frame of the input video sequence, denoted the t-th frame; comparing the R, G, B channels of each pixel of the frame and taking the minimum value, obtaining the minimum channel image I_min,
where I_min(x, y) = min(I_R(x, y), I_G(x, y), I_B(x, y)),
(x, y) is the coordinate location of the pixel, R, G, B denote the three color channels, and I is the buffered video frame.
For example, if a pixel's (R, G, B) value is (120, 250, 168), then the red channel's 120 is taken as the representative of that pixel; performing this operation for every pixel yields the minimum channel map I_min.
Each video frame is a three-dimensional matrix formed by the three channels R, G, B, each channel being a two-dimensional matrix. The invention first computes the minimum channel map I_min of the image, as sketched below.
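A minimal sketch of step S110, assuming the frame arrives as a NumPy array of shape (H, W, 3); the function name is ours.

    import numpy as np

    def minimum_channel(frame: np.ndarray) -> np.ndarray:
        """Step S110: per-pixel minimum over the R, G, B channels (I_min)."""
        return frame.min(axis=2)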
a dark primary calculation step S120: dividing the minimum channel map I_min of the t-th frame into non-overlapping sub-blocks and taking the minimum value within each sub-block, computing the down-sampled dark primary D_down of the minimum channel map I_min.
As for the dark primary: in the non-sky regions of most images, any local patch always contains at least one pixel whose value in one or several channels is very low, approaching zero. The dark primary rule holds mainly because everyday scenes usually contain local shadows, black objects, or surfaces with bright, saturated colors. Even in scenes such as green grass and red flowers, the blue channel is low. In an image degraded by fog, however, the dark primary is washed toward gray-white by the white atmospheric light and its intensity is higher. Therefore, the dark primary of a foggy video frame can serve as the scattered light component participating in imaging.
Referring to FIG. 2, a schematic diagram of dividing the minimum channel map to compute the down-sampled dark primary is shown. To improve computational efficiency and reduce memory use, this differs from computing the dark primary with a per-pixel sliding window: the minimum channel map is divided into non-overlapping sub-blocks and the minimum is taken within each sub-block.
The radius N of the non-overlapping sub-blocks is generally related to the size of the video frame; in one specific embodiment it is set to about 1/40 of the smaller of the frame's width and height. In addition, when the width or height of the frame is not an integer multiple of the sub-block size, the frame must be edge-extended by mirror padding.
As an alternative to steps S110 and S120, the video image to be processed could be divided directly into non-overlapping sub-blocks and the minimum taken over each sub-block, but this is less efficient than performing step S110 followed by step S120. A sketch of the block-minimum computation of step S120 follows.
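A sketch of step S120 under the same assumptions; `block` is the sub-block side length (the patent speaks of a radius N of about 1/40 of the smaller frame dimension), and the mirror padding follows the edge-extension rule described above.

    import numpy as np

    def downsampled_dark_primary(i_min: np.ndarray, block: int) -> np.ndarray:
        """Step S120: minimum over non-overlapping block x block sub-blocks."""
        h, w = i_min.shape
        pad_h = (-h) % block          # bottom padding so the height divides evenly
        pad_w = (-w) % block          # right padding so the width divides evenly
        padded = np.pad(i_min, ((0, pad_h), (0, pad_w)), mode="reflect")  # mirror fill
        ph, pw = padded.shape
        blocks = padded.reshape(ph // block, block, pw // block, block)
        return blocks.min(axis=(1, 3))  # one minimum per sub-block -> D_down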
Next is the global atmospheric light estimation step S130: after the t-th frame foggy image is converted into a gray image, the pixels are sorted in descending order of gray value, a certain proportion of high-brightness pixels is selected, and their average is taken as the global atmospheric light estimate A.
The maximum gray value in a foggy image is generally considered an approximation of the atmospheric light A. To avoid the influence of noise, after the foggy image is converted to a gray map the pixels are sorted in descending order of gray value, and in an optional embodiment the average gray value of the brightest 0.01% of pixels is taken as the estimate of the global atmospheric light A.
In one embodiment of the invention, the fog images to be processed are all assumed to be free of color cast or to have been white-balanced. A sketch of this estimation step follows.
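A sketch of step S130 under the same assumptions; the BT.601 luma weights used for the gray conversion are our choice, since the patent only says the frame is converted to a gray image.

    import numpy as np

    def global_atmospheric_light(frame: np.ndarray, ratio: float = 0.0001) -> float:
        """Step S130: mean gray value of the brightest `ratio` (0.01%) of pixels."""
        # Gray conversion with BT.601 weights, assuming RGB channel order.
        gray = 0.299 * frame[..., 0] + 0.587 * frame[..., 1] + 0.114 * frame[..., 2]
        flat = np.sort(gray.ravel())[::-1]           # descending by gray value
        k = max(1, int(round(flat.size * ratio)))    # at least one pixel
        return float(flat[:k].mean())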
Then comes the local small value boosting step S140: applying, pixel by pixel, a filtering process that raises locally small values to the down-sampled dark primary D_down, obtaining the optimized down-sampled dark primary D_filter.
In theory, the down-sampled dark primary obtained in step S120 could simply be upsampled and combined with the imaging model to eliminate the fog and restore a vivid, correctly colored fog-free image. However, this assumes the scattered light is locally constant: obtaining the scattered light estimate by taking local minima yields a rough estimate with an obvious blocking effect, and in regions where the depth of field jumps, such a rough estimate cannot be guaranteed to follow the geometric edges of the original depth of field.
Where the depth of field changes, the fog concentration changes dramatically, and the rough scattered light estimate produces false edges, because dark primary theory uses the local minimum as a rough estimate of the scattered light. The essence of this edge generation is that at the boundary between near and far views, the scattered light estimate is smaller in the region with the larger depth of field. Therefore, the invention applies a filtering process that corrects the smaller values of a local area to obtain D_filter, so that the smaller values of the down-sampled dark primary D_down are raised to some extent.
Specifically, the raising of locally small values is similar to mean filtering: each pixel to be processed is taken as the center pixel and a processing window is defined; pixels in the window whose gray value is greater than or equal to that of the center pixel are given weight w = 1, and pixels whose gray value is less than that of the center pixel are given a weight w < 1. The filtering formula is:
D_filter(x, y) = [ Σ_{(i,j)∈Ω} w(i, j) · D_down(i, j) ] / [ Σ_{(i,j)∈Ω} w(i, j) ]
where (x, y) are the coordinates of the pixel being filtered, Ω is the window centered on (x, y), (i, j) indexes the pixels within the window, and the weight w(i, j) is given by:
w(i, j) = 1 if D_down(i, j) ≥ D_down(x, y), and w(i, j) = w < 1 if D_down(i, j) < D_down(x, y).
In this step, the filtering typically starts from the top-left pixel of the down-sampled dark primary and slides down and to the right until all pixels have been processed. For edge pixels, the processing window can be defined by mirror extension.
In the present invention, the size of the window is generally selected to be an odd number of pixels, for example, 7 × 7 or 9 × 9 pixels.
Thus, when the depth of field does not vary drastically, this filtering approximates mean filtering; when the depth of field changes dramatically, it appropriately raises the underestimated scattered light values. The smaller w(i, j) is, the stronger the raising effect.
In this step, the filtering process that raises locally small values could also be replaced by any edge-preserving filter, such as a guided filter or a joint filter, but those filters are more complex. A sketch of the weighted-mean filter described above follows.
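A sketch of the weighted-mean filter of step S140 as described above. The window size and the value w_small < 1 are our illustrative choices; the plain double loop is acceptable here because the filter runs on the small down-sampled dark primary, not on the full frame.

    import numpy as np

    def lift_local_minima(d_down: np.ndarray, win: int = 7, w_small: float = 0.5) -> np.ndarray:
        """Step S140: raise locally small values of D_down.

        Pixels in the window darker than the center pixel get weight w_small < 1,
        all others get weight 1, and the weighted mean replaces the center value.
        """
        r = win // 2                                          # win must be odd
        padded = np.pad(d_down.astype(np.float64), r, mode="reflect")  # mirror edges
        out = np.empty(d_down.shape, dtype=np.float64)
        h, w = d_down.shape
        for y in range(h):
            for x in range(w):
                window = padded[y:y + win, x:x + win]         # window centered on (y, x)
                weights = np.where(window >= d_down[y, x], 1.0, w_small)
                out[y, x] = (weights * window).sum() / weights.sum()
        return out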
Finally comes the inverse solution step S150: upsampling the optimized down-sampled dark primary D_filter to obtain the scattered light estimate S(x, y) at each position of the t-th frame, and inversely solving the fog-free picture of the t-th frame from the scattered light estimate S and the global atmospheric light estimate A according to a preset foggy-day imaging model.
Theoretically, D_filter can be upsampled into a dark primary as large as each frame to serve as the scattered light estimate S, which is combined with the global atmospheric light A to inversely solve the fog-free picture of the t-th frame according to the foggy-day imaging model. However, this is difficult on devices with a small working memory, for example hardware running on a DSP. Therefore, the invention preferably upsamples D_filter block by block, obtains the scattered light estimate block by block, and inversely solves the fog-free picture block by block.
In this step, upsampling is implemented as follows:
the source image D_filter has size m × n and the target image has size a × b; the (p, q)-th pixel of the target image (row p, column q) is mapped back to the source image by the side-length ratios, giving coordinates (p × m/a, q × n/b); writing these floating-point coordinates as (k + u, l + v), where k and l are the integer parts and u and v are the fractional parts (floating-point numbers in [0, 1)), the value S(p, q) is determined by the four pixel values of the source image D_filter at coordinates (k, l), (k+1, l), (k, l+1), (k+1, l+1), i.e.: S(p, q) = (1-u) × (1-v) × D_filter(k, l) + (1-u) × v × D_filter(k, l+1) + u × (1-v) × D_filter(k+1, l) + u × v × D_filter(k+1, l+1).
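A sketch of the bilinear interpolation just described (names ours). For clarity it fills the whole a × b target at once; the block-wise variant the patent prefers would call it on one strip of rows at a time.

    import numpy as np

    def bilinear_upsample(d_filter: np.ndarray, a_rows: int, b_cols: int) -> np.ndarray:
        """Step S150: upsample D_filter (m x n) to the frame size (a x b)."""
        m, n = d_filter.shape
        out = np.empty((a_rows, b_cols), dtype=np.float64)
        for p in range(a_rows):
            for q in range(b_cols):
                fy, fx = p * m / a_rows, q * n / b_cols     # map back to source coords
                k, l = int(fy), int(fx)                     # integer parts
                u, v = fy - k, fx - l                       # fractional parts in [0, 1)
                k1, l1 = min(k + 1, m - 1), min(l + 1, n - 1)  # clamp at the border
                out[p, q] = ((1 - u) * (1 - v) * d_filter[k, l]
                             + (1 - u) * v * d_filter[k, l1]
                             + u * (1 - v) * d_filter[k1, l]
                             + u * v * d_filter[k1, l1])
        return out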
The common foggy day model is:
I(x,y)=J(x,y)t(x,y)+A(1-t(x,y))
where (x, y) is the coordinate position of the pixel, J is the fog-free image, t is the transmittance, A is the atmospheric light at infinity, and A(1 - t(x, y)) is the scattered light component of each pixel.
Substituting A(1 - t(x, y)) = S(x, y) gives:
J(x, y) = (I(x, y) - S(x, y)) / (1 - S(x, y)/A)
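A sketch of this inverse solution (names ours). The floor t_min on the transmittance guards against division by values near zero; it is a common safeguard in dark-channel defogging rather than something the patent specifies.

    import numpy as np

    def recover_frame(frame: np.ndarray, s: np.ndarray, a: float,
                      t_min: float = 0.1) -> np.ndarray:
        """Step S150: invert I = J*t + S with t = 1 - S/A, i.e. J = (I - S)/t."""
        t = np.maximum(1.0 - s / a, t_min)       # per-pixel transmittance, floored
        j = (frame.astype(np.float64) - s[..., None]) / t[..., None]
        return np.clip(j, 0, 255).astype(np.uint8)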
the present invention also discloses a storage medium that can be used to store computer-executable instructions that, when executed by a processor, perform the aforementioned accelerated real-time pre-processing method of video surveillance data.
Example 1:
Referring to FIG. 3, which shows the defogging effect of an embodiment of the invention (FIG. 3(a) is the foggy image and FIG. 3(b) the defogged video image), it can be seen that the invention achieves defogging with a low computational load and a low memory requirement, making it better suited to the uneven equipment configurations of IoT environments.
The invention has the following advantages:
(1) In step S120, the invention obtains a down-sampled dark primary rather than sliding a window pixel by pixel to obtain a dark primary as large as the video frame, which reduces the amount of computation and shortens the running time of the filtering process of step S140.
(2) In step S150, the invention upsamples D_filter block by block to estimate the scattered light, so the fog-free picture can be inversely solved block by block, which reduces the memory requirement and makes the method better suited to the uneven equipment configurations of IoT environments.
It will be apparent to those skilled in the art that the various elements or steps of the invention described above may be implemented on a general-purpose computing device and centralized on a single computing device; alternatively, they may be implemented as program code executable by a computing device, so that they can be stored in a memory device and executed by a computing device, or they may be separately fabricated as individual integrated circuit modules, or several of them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
While the invention has been described in further detail with reference to specific preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A real-time preprocessing method for video monitoring data, comprising the following steps:
a minimum channel obtaining step S110: buffering one frame of the input video sequence, denoted the t-th frame; comparing the R, G, B channels of each pixel of the frame and taking the minimum value, obtaining the minimum channel image I_min;
a dark primary calculation step S120: dividing the minimum channel map I_min of the t-th frame into non-overlapping sub-blocks and taking the minimum value within each sub-block, computing the down-sampled dark primary D_down of the minimum channel map I_min;
a global atmospheric light estimation step S130: after the t-th frame foggy image is converted into a gray image, sorting the pixels in descending order of gray value, selecting a certain proportion of high-brightness pixels, and taking their average as the global atmospheric light estimate A;
a local small value boosting step S140: applying, pixel by pixel, a filtering process that raises locally small values to the down-sampled dark primary D_down, obtaining the optimized down-sampled dark primary D_filter;
an inverse solution step S150: upsampling the optimized down-sampled dark primary D_filter to obtain the scattered light estimate S(x, y) at each position of the t-th frame, and inversely solving the fog-free picture of the t-th frame from the scattered light estimate S and the global atmospheric light estimate A according to a preset foggy-day imaging model.
2. The real-time preprocessing method of claim 1, wherein:
in step S110, I_min(x, y) = min(I_R(x, y), I_G(x, y), I_B(x, y)),
Where (x, y) is the coordinate location of the pixel, R, G, B represents the three color channels, and I is the buffered video frame.
3. The real-time preprocessing method of claim 1, wherein:
in step S120, the radius N of the non-overlapping sub-blocks is set to about 1/40 of the smaller of the width and height of the video frame;
and when the width or height of the video frame is not an integer multiple of the sub-block size, the frame is mirror-padded to extend its edges.
4. The real-time preprocessing method of claim 1, wherein:
in step S130, the average gray value of the brightest 0.01% of pixels is taken as the global atmospheric light estimate A.
5. The real-time preprocessing method of claim 1, wherein:
in step S130, the foggy images to be processed are all assumed to be free of color cast or to have undergone white balance processing.
6. The real-time preprocessing method of claim 1, wherein:
step S140 specifically comprises:
taking each pixel to be processed as the center pixel and defining a processing window; giving pixels in the window whose gray value is greater than or equal to that of the center pixel weight w = 1, and pixels whose gray value is less than that of the center pixel a weight w < 1, the filtering formula being:
D_filter(x, y) = [ Σ_{(i,j)∈Ω} w(i, j) · D_down(i, j) ] / [ Σ_{(i,j)∈Ω} w(i, j) ]
where (x, y) are the coordinates of the pixel being filtered, Ω is the window centered on (x, y), (i, j) indexes the pixels within the window, and the weight w(i, j) is given by:
w(i, j) = 1 if D_down(i, j) ≥ D_down(x, y), and w(i, j) = w < 1 if D_down(i, j) < D_down(x, y).
7. The real-time preprocessing method of claim 1, wherein:
in step S150, D_filter is upsampled block by block, the scattered light estimate is obtained block by block, and the fog-free picture is inversely solved block by block.
8. The real-time preprocessing method of claim 1, wherein:
in step S150, upsampling is implemented as follows:
the source image D_filter has size m × n and the target image has size a × b; the (p, q)-th pixel of the target image (row p, column q) is mapped back to the source image by the side-length ratios, giving coordinates (p × m/a, q × n/b); writing these floating-point coordinates as (k + u, l + v), where k and l are the integer parts and u and v are the fractional parts (floating-point numbers in [0, 1)), the value S(p, q) is determined by the four pixel values of the source image D_filter at coordinates (k, l), (k+1, l), (k, l+1), (k+1, l+1), i.e.: S(p, q) = (1-u) × (1-v) × D_filter(k, l) + (1-u) × v × D_filter(k, l+1) + u × (1-v) × D_filter(k+1, l) + u × v × D_filter(k+1, l+1).
9. The real-time preprocessing method of claim 8, wherein:
in step S150, the fog-free picture is inversely solved as:
J(x, y) = (I(x, y) - S(x, y)) / (1 - S(x, y)/A)
where J is the fog-free image, A(1 - t(x, y)) = S(x, y), t is the transmittance, A is the atmospheric light at infinity, and A(1 - t(x, y)) is the scattered light component of each pixel.
10. A storage medium capable of being used to store computer-executable instructions, characterized by:
the computer-executable instructions, when executed by a processor, perform a method of real-time pre-processing of video surveillance data as claimed in any one of claims 1 to 9.
CN201811136740.2A 2018-09-28 2018-09-28 Real-time preprocessing method and storage medium for video monitoring data Active CN109345479B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811136740.2A CN109345479B (en) 2018-09-28 2018-09-28 Real-time preprocessing method and storage medium for video monitoring data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811136740.2A CN109345479B (en) 2018-09-28 2018-09-28 Real-time preprocessing method and storage medium for video monitoring data

Publications (2)

Publication Number Publication Date
CN109345479A CN109345479A (en) 2019-02-15
CN109345479B (en) 2021-04-06

Family

ID=65307046

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811136740.2A Active CN109345479B (en) 2018-09-28 2018-09-28 Real-time preprocessing method and storage medium for video monitoring data

Country Status (1)

Country Link
CN (1) CN109345479B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113763254B (en) * 2020-06-05 2024-02-02 中移(成都)信息通信科技有限公司 Image processing method, device, equipment and computer storage medium
CN113063432B (en) * 2021-04-13 2023-05-09 清华大学 Visible light visual navigation method in smoke environment
CN114155161B (en) * 2021-11-01 2023-05-09 富瀚微电子(成都)有限公司 Image denoising method, device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663697A (en) * 2012-04-01 2012-09-12 大连海事大学 Enhancement method of underwater color video image
CN104281998A (en) * 2013-07-03 2015-01-14 中山大学深圳研究院 Quick single colored image defogging method based on guide filtering
CN106251301A (en) * 2016-07-26 2016-12-21 北京工业大学 A kind of single image defogging method based on dark primary priori
CN107330870A (en) * 2017-06-28 2017-11-07 北京航空航天大学 A kind of thick fog minimizing technology accurately estimated based on scene light radiation
CN107451966A (en) * 2017-07-25 2017-12-08 四川大学 A kind of real-time video defogging method realized using gray-scale map guiding filtering
CN108492259A (en) * 2017-02-06 2018-09-04 联发科技股份有限公司 A kind of image processing method and image processing system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8340461B2 (en) * 2010-02-01 2012-12-25 Microsoft Corporation Single image haze removal using dark channel priors

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663697A (en) * 2012-04-01 2012-09-12 大连海事大学 Enhancement method of underwater color video image
CN104281998A (en) * 2013-07-03 2015-01-14 中山大学深圳研究院 Quick single colored image defogging method based on guide filtering
CN106251301A (en) * 2016-07-26 2016-12-21 北京工业大学 A kind of single image defogging method based on dark primary priori
CN108492259A (en) * 2017-02-06 2018-09-04 联发科技股份有限公司 A kind of image processing method and image processing system
CN107330870A (en) * 2017-06-28 2017-11-07 北京航空航天大学 A kind of thick fog minimizing technology accurately estimated based on scene light radiation
CN107451966A (en) * 2017-07-25 2017-12-08 四川大学 A kind of real-time video defogging method realized using gray-scale map guiding filtering

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on clarification methods for foggy images based on the dark primary color theory; Fang Zhou; China Master's Theses Full-text Database, Information Science and Technology; 2016-04-15 (No. 04); pp. 25-26, 45-47 *

Also Published As

Publication number Publication date
CN109345479A (en) 2019-02-15

Similar Documents

Publication Publication Date Title
CN107527332B (en) Low-illumination image color retention enhancement method based on improved Retinex
WO2016206087A1 (en) Low-illumination image processing method and device
Gao et al. Sand-dust image restoration based on reversing the blue channel prior
CN108765336B (en) Image defogging method based on dark and bright primary color prior and adaptive parameter optimization
Zhang et al. A naturalness preserved fast dehazing algorithm using HSV color space
CN104253930A (en) Real-time video defogging method
CN109816608B (en) Low-illumination image self-adaptive brightness enhancement method based on noise suppression
CN109345479B (en) Real-time preprocessing method and storage medium for video monitoring data
Yang et al. Visibility restoration of single image captured in dust and haze weather conditions
CN112053298B (en) Image defogging method
CN106846258A (en) A kind of single image to the fog method based on weighted least squares filtering
CN108305225A (en) Traffic monitoring image rapid defogging method based on dark channel prior
Park et al. Nighttime image dehazing with local atmospheric light and weighted entropy
CN105023246B (en) A kind of image enchancing method based on contrast and structural similarity
Khan et al. Recent advancement in haze removal approaches
CN110111280A (en) A kind of enhancement algorithm for low-illumination image of multi-scale gradient domain guiding filtering
CN116823686B (en) Night infrared and visible light image fusion method based on image enhancement
CN106709876A (en) Optical remote sensing image defogging method based on the principle of dark pixel
CN111598800A (en) Single image defogging method based on space domain homomorphic filtering and dark channel prior
Chengtao et al. Improved dark channel prior dehazing approach using adaptive factor
Negru et al. Exponential image enhancement in daytime fog conditions
CN115619662A (en) Image defogging method based on dark channel prior
CN111028184B (en) Image enhancement method and system
CN115034985A (en) Underwater image enhancement method
CN114418874A (en) Low-illumination image enhancement method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
CB03 Change of inventor or designer information

Inventor after: Gao Yuanyuan

Inventor after: Liu Lingzhi

Inventor after: Bai Lifei

Inventor after: Wen Xiuxiu

Inventor before: Gao Yuanyuan

Inventor before: Ma Chao

Inventor before: Pan Bowen

Inventor before: Kang Zilu

Inventor before: Wen Xiuxiu

GR01 Patent grant
GR01 Patent grant