CN109345479A - Real-time preprocessing method and storage medium for video monitoring data - Google Patents
Real-time preprocessing method and storage medium for video monitoring data
- Publication number
- CN109345479A CN201811136740.2A CN201811136740A
- Authority
- CN
- China
- Prior art keywords
- pixel
- sampling
- filter
- real
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 52
- 238000012544 monitoring process Methods 0.000 title claims abstract description 14
- 238000005070 sampling Methods 0.000 claims abstract description 42
- 238000001914 filtration Methods 0.000 claims abstract description 23
- 238000003384 imaging method Methods 0.000 claims abstract description 12
- 238000005457 optimization Methods 0.000 claims description 4
- 230000001133 acceleration Effects 0.000 claims description 3
- 239000004615 ingredient Substances 0.000 claims description 3
- 230000001788 irregular Effects 0.000 abstract description 5
- 238000011084 recovery Methods 0.000 description 5
- 230000000694 effects Effects 0.000 description 4
- 230000000903 blocking effect Effects 0.000 description 3
- 230000010287 polarization Effects 0.000 description 3
- 230000008901 benefit Effects 0.000 description 2
- 230000002146 bilateral effect Effects 0.000 description 2
- 239000003595 mist Substances 0.000 description 2
- 230000008569 process Effects 0.000 description 2
- 230000000007 visual effect Effects 0.000 description 2
- 230000001154 acute effect Effects 0.000 description 1
- 230000015572 biosynthetic process Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 239000012141 concentrate Substances 0.000 description 1
- 230000003247 decreasing effect Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 230000002708 enhancing effect Effects 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 230000002452 interceptive effect Effects 0.000 description 1
- 239000011159 matrix material Substances 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
A real-time preprocessing method and storage medium for video monitoring data. The method comprises: obtaining the minimum channel map and computing the down-sampled dark channel D_down; converting the frame to a grayscale image, sorting the pixels by gray value in descending order, selecting a proportion of the brightest pixels, and taking their average as the global atmospheric light estimate A; applying to the down-sampled dark channel D_down a filtering operation that lifts locally small values, yielding the optimized down-sampled dark channel D_filter; up-sampling D_filter to obtain the scattered-light estimate at each position of the video and, according to a preset fog imaging model, inverting with the scattered-light estimate S and the global atmospheric light estimate A to recover the fog-free picture of frame t. By computing the dark channel on a down-sampled grid, the invention reduces the amount of computation and shortens the running time; by up-sampling D_filter block by block to estimate the scattered light, it can invert the fog-free picture block by block, which lowers the memory requirement and better suits the heterogeneous device configurations of Internet of Things environments.
Description
Technical field
This application relates to video data processing methods and, in particular, to a real-time preprocessing method for video monitoring data in contexts such as the Internet of Things.
Background art
Internet of Things (IoT) monitoring is a common security precaution: it uses sensor networks to react in a coordinated way to intuitive, accurate, and timely video information. However, current monitoring systems are very sensitive to weather conditions. In particular, under low-visibility foggy conditions, observed objects become blurred and color fidelity drops noticeably. To improve the adaptability of IoT monitoring systems to adverse weather, foggy IoT video must therefore be preprocessed so as to improve its visual quality.
Existing video defogging methods can be roughly divided into two classes according to whether they use a fog imaging model: video enhancement methods that do not use an imaging model and video restoration methods that do.
Typical and commonly used enhancement methods without an imaging model include histogram equalization, the wavelet transform, and Retinex. Histogram equalization is simple to implement and fast, but it easily compresses pixel gray values, and the processed video may lose detail. Wavelet and Retinex methods enhance the non-low-frequency sub-bands of the video and thus make details more prominent. However, they do not consider the physical cause of the degradation, so they are limited: they can only improve the visual quality of the video to a certain degree and cannot achieve defogging in the true sense.
Video restoration based on an imaging model essentially starts from the atmospheric scattering model and solves for its parameters to obtain a clear, fog-free video of the scene, thereby improving image quality. Such methods mainly include those based on polarization, those based on depth information, and those based on prior knowledge. Polarization-based methods use a polarizer to capture two or more images of the same scene with different degrees of polarization; they require hardware support, which severely limits their application. Depth-based methods obtain depth information of the scene and estimate its three-dimensional structure in order to recover a clear video; they require a certain amount of user interaction and cannot run fully automatically. Prior-based methods use statistical regularities or assumptions to estimate the unknown variables and then invert the model to recover the fog-free video. Such methods are physically sound, simple to implement, and produce good restoration results, so prior-based foggy video restoration is widely used in practice. Among prior-based methods, the dark channel prior algorithm proposed by He Kaiming et al. is generally regarded as achieving the best video defogging results to date. The dark channel prior states that, in outdoor images taken in clear weather, at least one color channel in any non-sky region has very low intensity, close to zero. According to the dark channel prior, the minimum value of a local region of a foggy image can serve as an estimate of the scattered-light variable, and the fog-free picture can then be obtained from the imaging model. However, because this method obtains the scattered light by minimum filtering, the estimate suffers from blocking artifacts, so He Kaiming et al. further refined the scattered light with a soft matting algorithm, whose time and space complexity are both very high, making real-time foggy video processing difficult.
To improve efficiency, many researchers have proposed replacing soft matting with fast filters: median filtering, guided filtering, and fast bilateral filtering have all been used to optimize the scattered-light estimate. Median filtering, however, is not edge-preserving; it cannot completely remove blocking artifacts, and halos remain after defogging. Guided filtering and fast bilateral filtering preserve local edges well and run fast, but they trade space for time and consume considerably more memory, which makes it hard to meet the needs of IoT environments with heterogeneous device hardware.
A video preprocessing method that is fast, memory-efficient, and strongly real-time, that can defog video in real time, and that is convenient to use in IoT environments is therefore an urgent technical problem to be solved in the prior art.
Summary of the invention
The object of the present invention is to provide a real-time preprocessing method and storage medium for video monitoring data that can defog video in real time and make the video clear. The method is fast, occupies little memory, and is convenient to use in IoT environments.
To achieve this purpose, the present invention adopts the following technical scheme:
A real-time preprocessing method for video monitoring data, comprising the following steps:
Minimum channel step S110: buffer one frame of the input video sequence, denoted frame t; for each pixel of the frame, compare its R, G, and B channels and take the minimum, obtaining the minimum channel map I_min;
Dark channel step S120: divide the minimum channel map I_min of frame t into non-overlapping sub-blocks and take the minimum within each sub-block, obtaining the down-sampled dark channel D_down of I_min;
Global atmospheric light estimation step S130: convert the foggy frame t to a grayscale image, sort the pixels by gray value in descending order, select a proportion of the brightest pixels, and take their average as the global atmospheric light estimate A;
Local small-value lifting step S140: apply, pixel by pixel, a filtering operation to the down-sampled dark channel D_down that lifts locally small values, obtaining the optimized down-sampled dark channel D_filter;
Inversion step S150: up-sample the optimized down-sampled dark channel D_filter to obtain the scattered-light estimate S(x, y) at each position of frame t and, according to a preset fog imaging model, invert with the scattered-light estimate S and the global atmospheric light estimate A to obtain the fog-free picture of frame t.
Optionally, in step S110, I_min(x, y) = min(I_R(x, y), I_G(x, y), I_B(x, y)),
where (x, y) is the pixel coordinate, R, G, B denote the three color channels, and I is the buffered video frame.
Optionally, in step S120, the radius N of the non-overlapping sub-blocks is set to about 1/40 of the smaller of the width and height of the video frame;
furthermore, when the width or height of the video frame is not an integer multiple of the sub-block size, the frame is padded by mirroring, i.e. boundary extension is applied to the video.
Optionally, in step S130, the average gray value of the brightest 0.01% of pixels is taken as the global atmospheric light estimate A.
Optionally, in step S130, the processed image is assumed by default to have no color cast or to have been white-balanced.
Optionally, step S140 is specifically:
take each pixel to be processed as the center pixel and define a processing window around it; set the weight w of pixels in the window whose gray value is greater than or equal to the gray value of the center pixel to 1, and set the weight of pixels whose gray value is less than that of the center pixel to w < 1. The filter computes a weighted combination over the window, where (x, y) is the coordinate of the pixel being filtered, Ω is the window centered at (x, y), (i, j) is a pixel in the window, and w(i, j) is the weight assigned as described above.
Optionally, in step S150, D_filter is up-sampled block by block, the scattered-light estimate is obtained block by block, and the fog-free picture is then inverted block by block.
Optionally, in step S150, the up-sampling is implemented as follows:
the source image D_filter has size m × n and the target image has size a × b. The (p, q)-th pixel of the target image, i.e. row p, column q, is mapped back to the source image by the size ratio; the corresponding coordinate is (p × m/a, q × n/b), a floating-point coordinate (k + u, l + v), where k, l are the integer parts and u, v are the fractional parts, floating-point numbers in [0, 1). The value S(p, q) at this point is then determined by the four pixels of the source image D_filter at coordinates (k, l), (k+1, l), (k, l+1), and (k+1, l+1), namely: S(p, q) = (1-u) × (1-v) × D_filter(k, l) + (1-u) × v × D_filter(k, l+1) + u × (1-v) × D_filter(k+1, l) + u × v × D_filter(k+1, l+1).
Optionally, in step S150, the fog-free picture is obtained by inverting the imaging model,
where J is the fog-free image, A(1 - t(x, y)) = S(x, y), t is the transmission, A is the atmospheric light at infinity, and A(1 - t(x, y)) is the scattered-light component of each pixel.
The invention also discloses a storage medium that can be used to store computer-executable instructions; when executed by a processor, the computer-executable instructions carry out the above real-time preprocessing method for video monitoring data.
The present invention has the following advantages:
(1) In step S120, the invention does not obtain a full-resolution dark channel of the same size as the video by sliding a window point by point; instead it obtains a down-sampled dark channel, which reduces the amount of computation in the filtering of step S140 and shortens the running time.
(2) In step S150, the invention up-samples D_filter block by block to estimate the scattered light and thus inverts the fog-free picture block by block, which reduces the demand on memory space and better suits the heterogeneous device configurations of IoT environments.
Brief description of the drawings
Fig. 1 is a flow chart of the real-time preprocessing method for video monitoring data according to a specific embodiment of the present invention;
Fig. 2 is a schematic diagram of computing the down-sampled dark channel in the dark channel step according to a specific embodiment of the present invention;
Fig. 3 shows the defogging result of a specific embodiment of the present invention, where Fig. 3(a) is the foggy picture and Fig. 3(b) is the video picture after defogging.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the invention, not to limit it. It should also be noted that, for convenience of description, the drawings show only the parts related to the invention rather than the entire structure.
To solve the problem of fog appearing in video monitoring, and in particular to improve video quality in foggy weather, the essence of the present invention is: first divide each frame into non-overlapping sub-blocks and compute the down-sampled dark channel; then apply to each pixel of the down-sampled dark channel a filtering operation that lifts locally small values; finally, up-sample the optimized down-sampled dark channel to obtain the scattered-light estimate and recover the picture of the video frame using the single-scattering imaging model. The invention achieves video defogging with high computational efficiency and a small memory footprint, and the resulting video pictures have high fidelity.
Specifically, referring to Fig. 1, the specific steps of the real-time preprocessing method for video monitoring data of the embodiment of the invention are as follows.
Minimum channel step S110: buffer one frame of the input video sequence, denoted frame t; for each pixel of the frame, compare its R, G, and B channels and take the minimum, obtaining the minimum channel map I_min,
where I_min(x, y) = min(I_R(x, y), I_G(x, y), I_B(x, y)),
and where (x, y) is the pixel coordinate, R, G, B denote the three color channels, and I is the buffered video frame.
For example, if (R, G, B) at some pixel is (120, 250, 168), the red value 120 is taken to represent that pixel. This operation is carried out for every pixel, yielding the minimum channel map I_min.
Each video frame is a three-dimensional array formed by the R, G, and B channels, each channel being a two-dimensional matrix. The present invention first computes the minimum channel map I_min of the image.
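As an illustration, step S110 reduces to a per-pixel minimum over the channel axis. The sketch below assumes the buffered frame is held as an H × W × 3 NumPy array; the patent does not prescribe any particular implementation.

```python
import numpy as np

def minimum_channel(frame):
    """Step S110: per-pixel minimum over the R, G, B channels.

    frame: H x W x 3 array holding one buffered video frame.
    Returns the minimum channel map I_min of shape H x W.
    """
    return frame.min(axis=2)
```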
Dark channel step S120: divide the minimum channel map I_min of frame t into non-overlapping sub-blocks and take the minimum within each sub-block, obtaining the down-sampled dark channel D_down of I_min.
Regarding the dark channel: in most images, any local patch of a non-sky region always contains at least one pixel whose value in some channel or channels is very low, close to zero. This dark channel regularity exists mainly because everyday scenes usually contain local shadows, black objects, or colored surfaces; even in scenes of green grass or red flowers, the blue channel is very low. In a picture degraded by fog, however, the dark channel is rendered grayish white by the white atmospheric light and its intensity is higher. The dark channel of a foggy video frame can therefore serve as the scattered-light component that participates in imaging.
Referring to Fig. 2, which shows schematically how the minimum channel is partitioned to compute the down-sampled dark channel: to improve computational efficiency and reduce the memory footprint, and unlike methods that compute the dark channel with a pixel-by-pixel sliding window, the present invention divides the minimum channel into non-overlapping sub-blocks and takes the minimum within each sub-block to compute the down-sampled dark channel.
The radius N of the non-overlapping sub-blocks is generally related to the size of the video frame; in a specific embodiment it is usually set to about 1/40 of the smaller of the width and height of the frame. In addition, when the width or height of the frame is not an integer multiple of the sub-block size, boundary padding is required; the present invention pads by mirroring.
As for steps S110 and S120, the video image to be processed could also be divided directly into non-overlapping sub-blocks, with the minimum taken within each sub-block, but this is less efficient than directly applying step S110 followed by step S120.
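A minimal sketch of the block-minimum down-sampling of step S120, assuming mirror padding on the right and bottom edges and a square sub-block whose side length `block` is derived from the sub-block radius N (both names are illustrative):

```python
import numpy as np

def downsampled_dark_channel(i_min, block):
    """Step S120: block-wise minimum of the minimum channel map.

    i_min: H x W minimum channel map from step S110.
    block: side length of the non-overlapping sub-blocks,
           e.g. roughly min(H, W) // 40 in the embodiment.
    """
    h, w = i_min.shape
    pad_h = (-h) % block                      # mirror-pad so the sizes become
    pad_w = (-w) % block                      # integer multiples of the block
    padded = np.pad(i_min, ((0, pad_h), (0, pad_w)), mode="reflect")
    H, W = padded.shape
    blocks = padded.reshape(H // block, block, W // block, block)
    return blocks.min(axis=(1, 3))            # one minimum per sub-block
```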
Global atmospheric light estimation step S130: convert the foggy frame t to a grayscale image, sort the pixels by gray value in descending order, select a proportion of the brightest pixels, and take their average as the global atmospheric light estimate A.
The maximum gray value of a foggy image is usually regarded as an approximation of the atmospheric light A. To avoid the influence of noise, the present invention converts the foggy image to a grayscale image and sorts the pixels by gray value in descending order; in an optional embodiment, the average gray value of the brightest 0.01% of pixels is chosen as the estimate of the global atmospheric light A.
In a specific embodiment of the present invention, the foggy image being processed is assumed by default to have no color cast or to have been white-balanced.
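A sketch of step S130 under the stated assumptions (no color cast, or already white-balanced). The grayscale conversion is not specified in the patent; a plain channel mean is used here for illustration:

```python
import numpy as np

def estimate_atmospheric_light(frame, top_fraction=0.0001):
    """Step S130: average gray value of the brightest pixels.

    frame: H x W x 3 foggy frame.
    top_fraction: proportion of brightest pixels to average
                  (0.01% in the optional embodiment).
    """
    gray = frame.astype(np.float64).mean(axis=2)   # illustrative grayscale conversion
    flat = np.sort(gray.ravel())[::-1]             # descending order of gray value
    k = max(1, int(round(top_fraction * flat.size)))
    return flat[:k].mean()
```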
Local small-value lifting step S140: apply, pixel by pixel, a filtering operation to the down-sampled dark channel D_down that lifts locally small values, obtaining the optimized down-sampled dark channel D_filter.
Given the down-sampled dark channel obtained in step S120, one could in theory up-sample it directly and, combined with the imaging model, eliminate the fog and recover a faithful, fog-free color image. This approach, however, assumes that the local scattered light is constant and obtains the scattered-light estimate as a local minimum; the resulting coarse estimate of the scattered light shows obvious blocking artifacts, and in regions where the depth of field jumps, such a coarse estimate cannot guarantee that the scattered light respects the geometric edges of the original depth of field. A change in the depth of field causes a sharp change in fog density, and because the dark channel theory takes the minimum of a local region as the coarse estimate of the scattered light, a false edge is produced at such a rough boundary. The essence of this edge is that, at the boundary between near and far scenery, the scattered light of the large-depth area is under-estimated. For this reason, the present invention applies a filtering operation that corrects the locally small values of the local region, obtaining D_filter, so that the locally small values of the down-sampled dark channel D_down are lifted to a certain degree.
Specifically, the lifting of locally small values is similar to mean filtering: each pixel to be processed is taken as the center pixel and a processing window is defined around it; the weight w of pixels in the window whose gray value is greater than or equal to the gray value of the center pixel is set to 1, and the weight of pixels whose gray value is less than that of the center pixel is set to w < 1. The filter computes a weighted combination over the window, where (x, y) is the coordinate of the pixel being filtered, Ω is the window centered at (x, y), (i, j) is a pixel in the window, and w(i, j) is the weight assigned as described above.
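Written out, a weighted average consistent with this description, assuming the weights are normalized by their sum (a reconstruction, not a verbatim reproduction of the published formula), is:

D_filter(x, y) = Σ_{(i,j)∈Ω} w(i, j) · D_down(i, j) / Σ_{(i,j)∈Ω} w(i, j),

with w(i, j) = 1 when D_down(i, j) ≥ D_down(x, y), and w(i, j) = w < 1 otherwise.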
In this step, the filtering usually starts at the top-left corner of the down-sampled dark channel and slides to the right and downward until all pixels have been processed. For pixels at the border, the processing window is delimited by mirror extension.
In the present invention, the window size is typically chosen as an odd number of pixels, for example 7×7 or 9×9.
Therefore, when the depth of field does not vary sharply, this filter behaves like a mean filter; when the depth of field varies sharply, it appropriately boosts the under-estimated scattered-light values. The smaller w(i, j) is, the stronger the lifting effect.
In this step, the local small-value lifting filter can be replaced by any edge-preserving filter, wavelet-based filter, or related filter, but those filtering methods have higher complexity.
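A direct, unoptimized sketch of the local small-value lifting filter as just described, assuming the normalized weighted average above; `radius` and `w_small` are illustrative parameter names (the patent only states an odd window such as 7×7 or 9×9 and a weight w < 1):

```python
import numpy as np

def lift_local_small_values(d_down, radius=3, w_small=0.5):
    """Step S140: mean-filter-like smoothing that lifts locally
    small values of the down-sampled dark channel.

    d_down : down-sampled dark channel from step S120.
    radius : half window size; 2*radius + 1 = 7 gives a 7x7 window.
    w_small: weight (< 1) given to pixels darker than the center pixel.
    """
    h, w = d_down.shape
    padded = np.pad(d_down, radius, mode="reflect")     # mirror-extend the border
    out = np.empty_like(d_down, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            win = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            center = d_down[y, x]
            weights = np.where(win >= center, 1.0, w_small)
            out[y, x] = (weights * win).sum() / weights.sum()
    return out
```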
Inversion step S150: up-sample the optimized down-sampled dark channel D_filter to obtain the scattered-light estimate S(x, y) at each position of frame t and, according to a preset fog imaging model, invert with the scattered-light estimate S and the global atmospheric light estimate A to obtain the fog-free picture of frame t.
In theory, D_filter can be up-sampled to the same size as each frame to form the scattered-light estimate S, and the fog-free picture of frame t can then be inverted from the fog imaging model together with the global atmospheric light A. For devices with little processing memory, however, for example hardware based on DSPs, this is difficult to realize. The present invention therefore preferably up-samples D_filter block by block, obtains the scattered-light estimate block by block, and inverts the fog-free picture block by block.
In this step, the up-sampling is implemented as follows:
the source image D_filter has size m × n and the target image has size a × b. The (p, q)-th pixel of the target image, i.e. row p, column q, is mapped back to the source image by the size ratio; the corresponding coordinate is (p × m/a, q × n/b), a floating-point coordinate (k + u, l + v), where k, l are the integer parts and u, v are the fractional parts, floating-point numbers in [0, 1). The value S(p, q) at this point is then determined by the four pixels of the source image D_filter at coordinates (k, l), (k+1, l), (k, l+1), and (k+1, l+1), namely: S(p, q) = (1-u) × (1-v) × D_filter(k, l) + (1-u) × v × D_filter(k, l+1) + u × (1-v) × D_filter(k+1, l) + u × v × D_filter(k+1, l+1).
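The mapping above is bilinear interpolation. A full-frame sketch is shown for clarity (the patent prefers applying the same interpolation block by block to save memory); the clamping of the neighbor indices is added so that k+1 and l+1 stay inside the source image:

```python
import numpy as np

def upsample_bilinear(d_filter, out_h, out_w):
    """Step S150 (up-sampling): bilinear interpolation of D_filter
    to the target size, following the coordinate mapping in the text."""
    m, n = d_filter.shape
    # Target pixel (p, q) maps to the floating-point source coordinate
    # (p * m / out_h, q * n / out_w) = (k + u, l + v).
    p = np.arange(out_h)[:, None] * m / out_h
    q = np.arange(out_w)[None, :] * n / out_w
    k = np.floor(p).astype(int)               # integer parts
    l = np.floor(q).astype(int)
    u, v = p - k, q - l                       # fractional parts in [0, 1)
    k1 = np.minimum(k + 1, m - 1)             # keep neighbours inside the image
    l1 = np.minimum(l + 1, n - 1)
    return ((1 - u) * (1 - v) * d_filter[k, l]
            + (1 - u) * v * d_filter[k, l1]
            + u * (1 - v) * d_filter[k1, l]
            + u * v * d_filter[k1, l1])
```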
The common fog imaging model is:
I(x, y) = J(x, y) t(x, y) + A (1 - t(x, y))
where (x, y) is the pixel coordinate, J is the fog-free image, t is the transmission, A is the atmospheric light at infinity, and A(1 - t(x, y)) is the scattered-light component of each pixel.
Since A(1 - t(x, y)) = S(x, y), the transmission is t(x, y) = 1 - S(x, y)/A, and the fog-free picture is obtained by inverting the model.
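Substituting t(x, y) = 1 - S(x, y)/A into the model gives the inversion (a reconstruction of the formula implied by the text):

J(x, y) = (I(x, y) - S(x, y)) / (1 - S(x, y)/A)

A sketch of this inversion follows; the lower bound t_min on the transmission is an illustrative safeguard against division by values near zero and is not specified in the patent:

```python
import numpy as np

def invert_fog_model(frame, s, a, t_min=0.1):
    """Step S150 (inversion): recover the fog-free picture J from
    I = J * t + A * (1 - t) with A * (1 - t) = S, i.e. t = 1 - S / A."""
    t = np.maximum(1.0 - s / a, t_min)                  # transmission, floored
    j = (frame.astype(np.float64) - s[..., None]) / t[..., None]
    return np.clip(j, 0, 255).astype(np.uint8)
```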
The invention also discloses a storage medium that can be used to store computer-executable instructions; when executed by a processor, the computer-executable instructions carry out the above real-time preprocessing method for video monitoring data.
Embodiment 1:
Referring to Fig. 3, which shows the defogging result of a specific embodiment of the present invention, where Fig. 3(a) is the foggy picture and Fig. 3(b) is the video picture after defogging, it can be seen that the present invention achieves defogging with a low amount of computation and modest memory requirements, and is well suited to the heterogeneous device configurations of IoT environments.
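Putting the steps together, a per-frame pipeline consistent with the embodiment might look like the sketch below, reusing the illustrative helpers from the previous sections (full-frame version; the block-wise up-sampling and inversion preferred by the invention are omitted for brevity):

```python
def dehaze_frame(frame, block, radius=3, w_small=0.5):
    """Steps S110-S150 applied to one buffered frame."""
    i_min = minimum_channel(frame)                               # S110
    d_down = downsampled_dark_channel(i_min, block)              # S120
    a = estimate_atmospheric_light(frame)                        # S130
    d_filter = lift_local_small_values(d_down, radius, w_small)  # S140
    h, w = i_min.shape
    s = upsample_bilinear(d_filter, h, w)                        # S150: scattered light
    return invert_fog_model(frame, s, a)                         # S150: inversion
```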
The present invention has the following advantages:
(1) In step S120, the invention does not obtain a full-resolution dark channel of the same size as the video by sliding a window point by point; instead it obtains a down-sampled dark channel, which reduces the amount of computation in the filtering of step S140 and shortens the running time.
(2) In step S150, the invention up-samples D_filter block by block as the scattered-light estimate and thus inverts the fog-free picture block by block, which reduces the demand on memory space and better suits the heterogeneous device configurations of IoT environments.
Obviously, those skilled in the art will understand that the units or steps of the present invention described above can be implemented with a general-purpose computing device; they can be concentrated on a single computing device or, optionally, implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; or they can be made into individual integrated circuit modules, or several of the modules or steps among them can be made into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above is a further detailed description of the present invention in connection with specific preferred embodiments, but it cannot be concluded that the specific embodiments of the invention are limited to these. For those of ordinary skill in the art to which the invention belongs, several simple deductions or substitutions can be made without departing from the concept of the invention, and all of these shall be regarded as falling within the scope of protection determined by the submitted claims.
Claims (10)
1. A real-time preprocessing method for video monitoring data, comprising the following steps:
Minimum channel step S110: buffer one frame of the input video sequence, denoted frame t; for each pixel of the frame, compare its R, G, and B channels and take the minimum, obtaining the minimum channel map I_min;
Dark channel step S120: divide the minimum channel map I_min of frame t into non-overlapping sub-blocks and take the minimum within each sub-block, obtaining the down-sampled dark channel D_down of I_min;
Global atmospheric light estimation step S130: convert the foggy frame t to a grayscale image, sort the pixels by gray value in descending order, select a proportion of the brightest pixels, and take their average as the global atmospheric light estimate A;
Local small-value lifting step S140: apply, pixel by pixel, a filtering operation to the down-sampled dark channel D_down that lifts locally small values, obtaining the optimized down-sampled dark channel D_filter;
Inversion step S150: up-sample the optimized down-sampled dark channel D_filter to obtain the scattered-light estimate S(x, y) at each position of frame t and, according to a preset fog imaging model, invert with the scattered-light estimate S and the global atmospheric light estimate A to obtain the fog-free picture of frame t.
2. The real-time preprocessing method according to claim 1, characterized in that:
in step S110, I_min(x, y) = min(I_R(x, y), I_G(x, y), I_B(x, y)),
where (x, y) is the pixel coordinate, R, G, B denote the three color channels, and I is the buffered video frame.
3. The real-time preprocessing method according to claim 1, characterized in that:
in step S120, the radius N of the non-overlapping sub-blocks is set to about 1/40 of the smaller of the width and height of the video frame;
furthermore, when the width or height of the video frame is not an integer multiple of the sub-block size, the frame is padded by mirroring, i.e. boundary extension is applied to the video.
4. The real-time preprocessing method according to claim 1, characterized in that:
in step S130, the average gray value of the brightest 0.01% of pixels is taken as the global atmospheric light estimate A.
5. The real-time preprocessing method according to claim 1, characterized in that:
in step S130, the processed foggy image is assumed by default to have no color cast or to have been white-balanced.
6. The real-time preprocessing method according to claim 1, characterized in that step S140 is specifically:
take each pixel to be processed as the center pixel and define a processing window around it; set the weight w of pixels in the processing window whose gray value is greater than or equal to the gray value of the center pixel to 1, and set the weight of pixels whose gray value is less than that of the center pixel to w < 1; the filter computes a weighted combination over the window,
where (x, y) is the coordinate of the pixel being filtered, Ω is the window centered at (x, y), (i, j) is a pixel in the window, and w(i, j) is the weight assigned as described above.
7. The real-time preprocessing method according to claim 1, characterized in that:
in step S150, D_filter is up-sampled block by block, the scattered-light estimate is obtained block by block, and the fog-free picture is then inverted block by block.
8. The real-time preprocessing method according to claim 1, characterized in that:
in step S150, the up-sampling is implemented as follows:
the source image D_filter has size m × n and the target image has size a × b; the (p, q)-th pixel of the target image, i.e. row p, column q, is mapped back to the source image by the size ratio; the corresponding coordinate is (p × m/a, q × n/b), a floating-point coordinate (k + u, l + v), where k, l are the integer parts and u, v are the fractional parts, floating-point numbers in [0, 1); the value S(p, q) at this point is then determined by the four pixels of the source image D_filter at coordinates (k, l), (k+1, l), (k, l+1), and (k+1, l+1), namely: S(p, q) = (1-u) × (1-v) × D_filter(k, l) + (1-u) × v × D_filter(k, l+1) + u × (1-v) × D_filter(k+1, l) + u × v × D_filter(k+1, l+1).
9. The real-time preprocessing method according to claim 8, characterized in that:
in step S150, the fog-free picture is obtained by inverting the imaging model,
where J is the fog-free image, A(1 - t(x, y)) = S(x, y), t is the transmission, A is the atmospheric light at infinity, and A(1 - t(x, y)) is the scattered-light component of each pixel.
10. A storage medium that can be used to store computer-executable instructions, characterized in that:
when executed by a processor, the computer-executable instructions carry out the real-time preprocessing method for video monitoring data according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811136740.2A CN109345479B (en) | 2018-09-28 | 2018-09-28 | Real-time preprocessing method and storage medium for video monitoring data |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811136740.2A CN109345479B (en) | 2018-09-28 | 2018-09-28 | Real-time preprocessing method and storage medium for video monitoring data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109345479A true CN109345479A (en) | 2019-02-15 |
CN109345479B CN109345479B (en) | 2021-04-06 |
Family
ID=65307046
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811136740.2A Active CN109345479B (en) | 2018-09-28 | 2018-09-28 | Real-time preprocessing method and storage medium for video monitoring data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109345479B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113063432A (en) * | 2021-04-13 | 2021-07-02 | 清华大学 | Visible light visual navigation method in smoke environment |
CN113763254A (en) * | 2020-06-05 | 2021-12-07 | 中移(成都)信息通信科技有限公司 | Image processing method, device and equipment and computer storage medium |
CN114155161A (en) * | 2021-11-01 | 2022-03-08 | 富瀚微电子(成都)有限公司 | Image denoising method and device, electronic equipment and storage medium |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110188775A1 (en) * | 2010-02-01 | 2011-08-04 | Microsoft Corporation | Single Image Haze Removal Using Dark Channel Priors |
CN102663697A (en) * | 2012-04-01 | 2012-09-12 | 大连海事大学 | Enhancement method of underwater color video image |
CN104281998A (en) * | 2013-07-03 | 2015-01-14 | 中山大学深圳研究院 | Quick single colored image defogging method based on guide filtering |
CN106251301A (en) * | 2016-07-26 | 2016-12-21 | 北京工业大学 | A kind of single image defogging method based on dark primary priori |
CN108492259A (en) * | 2017-02-06 | 2018-09-04 | 联发科技股份有限公司 | A kind of image processing method and image processing system |
CN107330870A (en) * | 2017-06-28 | 2017-11-07 | 北京航空航天大学 | A kind of thick fog minimizing technology accurately estimated based on scene light radiation |
CN107451966A (en) * | 2017-07-25 | 2017-12-08 | 四川大学 | A kind of real-time video defogging method realized using gray-scale map guiding filtering |
Non-Patent Citations (1)
Title |
---|
Fang Zhou, "Research on clarification methods for foggy images based on the dark channel prior theory" (基于暗原色理论的雾天图像清晰化方法研究), China Master's Theses Full-text Database, Information Science and Technology Series *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113763254A (en) * | 2020-06-05 | 2021-12-07 | 中移(成都)信息通信科技有限公司 | Image processing method, device and equipment and computer storage medium |
CN113763254B (en) * | 2020-06-05 | 2024-02-02 | 中移(成都)信息通信科技有限公司 | Image processing method, device, equipment and computer storage medium |
CN113063432A (en) * | 2021-04-13 | 2021-07-02 | 清华大学 | Visible light visual navigation method in smoke environment |
CN114155161A (en) * | 2021-11-01 | 2022-03-08 | 富瀚微电子(成都)有限公司 | Image denoising method and device, electronic equipment and storage medium |
CN114155161B (en) * | 2021-11-01 | 2023-05-09 | 富瀚微电子(成都)有限公司 | Image denoising method, device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109345479B (en) | 2021-04-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2016206087A1 (en) | Low-illumination image processing method and device | |
CN102750674B (en) | Video image defogging method based on self-adapting allowance | |
CN103218778B (en) | The disposal route of a kind of image and video and device | |
KR101756173B1 (en) | Image dehazing system by modifying the lower-bound of transmission rate and method therefor | |
US10367976B2 (en) | Single image haze removal | |
CN106157270B (en) | A kind of single image rapid defogging method and system | |
KR101426298B1 (en) | apparatus and method for compensating image for enhancing fog removing efficiency | |
CN110544213A (en) | Image defogging method based on global and local feature fusion | |
KR101917094B1 (en) | Fast smog and dark image improvement method and apparatus by using mapping table | |
CN109345479A (en) | A kind of real-time preprocess method and storage medium of video monitoring data | |
CN115578297A (en) | Generalized attenuation image enhancement method for self-adaptive color compensation and detail optimization | |
CN106355560A (en) | Method and system for extracting atmospheric light value in haze image | |
CN117611501A (en) | Low-illumination image enhancement method, device, equipment and readable storage medium | |
CN113034379A (en) | Weather-time self-adaptive rapid image sharpening processing method | |
CN115187472A (en) | Dark channel prior defogging method based on tolerance | |
Sharma et al. | Single Image Dehazing and Non-uniform Illumination Enhancement: AZ-Score Approach | |
Negru et al. | Exponential image enhancement in daytime fog conditions | |
CN113284058B (en) | Underwater image enhancement method based on migration theory | |
CN109636735A (en) | A kind of fast video defogging method based on space-time consistency constraint | |
CN108596856A (en) | A kind of image defogging method and device | |
CN111028184B (en) | Image enhancement method and system | |
CN118333902B (en) | Method, system, equipment and medium for clearing underwater non-uniform illumination image | |
KR102141122B1 (en) | Method for removing fog and apparatus therefor | |
CN114757835A (en) | Weather-time self-adaptive rapid image sharpening processing method | |
Kumari et al. | Fast and efficient contrast enhancement for real time video dehazing and defogging |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
CB03 | Change of inventor or designer information | Inventor after: Gao Yuanyuan; Liu Lingzhi; Bai Lifei; Wen Xiuxiu. Inventor before: Gao Yuanyuan; Ma Chao; Pan Bowen; Kang Zilu; Wen Xiuxiu. |
GR01 | Patent grant | |