CN107256562A - Image defogging method and device based on binocular vision system - Google Patents
- Publication number
- CN107256562A (application CN201710379740.4A)
- Authority
- CN
- China
- Prior art keywords
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20028—Bilateral filtering
Landscapes
- Image Processing (AREA)
Abstract
The invention discloses an image defogging method based on a binocular vision system, comprising the following steps: calibrate the binocular camera lenses; capture two original images with the binocular camera; compute a disparity image from the two original images; and apply bilateral filtering to the original image to obtain the defogged image, with the disparity image serving as a constraint on the bilateral filter. The invention fuses the effective information of the left and right images and, compared with traditional single-image defogging, can provide clearer result images richer in effective information.
Description
Technical field
The present invention relates to an image processing method, and in particular to an image defogging method based on a binocular vision system.
Background technology
Binocular stereo vision is an image-information acquisition approach based on two viewpoints. Compared with monocular vision algorithms, it collects richer image information: the effective information available for a given pixel can be up to twice that of a single view. For the same reason, the target image produced by an image-sharpening algorithm from binocular input is clearer than its monocular counterpart, although noise interference also increases correspondingly.
A technical problem that currently needs urgent solution by those skilled in the art is: how to effectively denoise the target images acquired by a binocular stereo vision system and extract more effective information from them.
Summary of the invention
To solve the above problems, the present invention provides an image defogging method based on a binocular vision system. It defogs the target images acquired by the binocular vision system and introduces disparity information during defogging, which prevents loss of effective information during image processing. The invention fuses the effective information of the left and right images and, compared with traditional single-image defogging, can provide clearer result images richer in effective information.
To achieve these goals, the present invention adopts the following technical scheme:
An image defogging method based on a binocular vision system comprises the following steps:
Step 1: calibrate the binocular camera lenses;
Step 2: capture two original images with the binocular camera, and compute a disparity image from the two original images;
Step 3: apply bilateral filtering to the original image to obtain the defogged image, using the disparity image as a constraint on the bilateral filter.
The calibration in Step 1 uses a four-step calibration algorithm.
In the calibration of Step 1, the calibration is corrected by calibrating twice: one third of the distortion coefficients from the first calibration result is used as the correction factor for corner extraction in the second calibration.
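The two-pass correction above can be sketched numerically as follows; the function name is illustrative and the sample coefficients reuse the left-camera values reported later in the document, with the common [k1, k2, p1, p2, k3] layout assumed:

```python
import numpy as np

def second_pass_correction(first_pass_kc):
    """One third of the first-pass distortion coefficients, used as the
    correction factor for corner extraction in the second calibration pass
    (replacing a random initialisation, per the text)."""
    return np.asarray(first_pass_kc, dtype=float) / 3.0

# Illustrative first-pass distortion vector [k1, k2, p1, p2, k3].
kc_first = [0.22938, -4.04138, 0.00319, 0.00245, 0.00000]
corr = second_pass_correction(kc_first)
```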
Obtaining the disparity image in Step 2 further comprises the following steps:
Step 2.1: matching cost accumulation, specifically using an adaptive SSD feature matching algorithm and computing the matching cost from the SSD algorithm together with a similarity measure factor based on gradient features; the specific formulas are:
C_{ssd}(x, y, d) = \sum_{i=-n}^{n} \sum_{j=-m}^{m} \left[ I_1(x+i, y+j) - I_2(x+i, y+j+d) \right]^2
C_{GRAD}(n, d) = \sum_{(i,j)\in N_E(n)} \left| \nabla_E I_1(i, j) - \nabla_E I_2(i+d, j) \right| + \sum_{(i,j)\in N_W(n)} \left| \nabla_W I_1(i, j) - \nabla_W I_2(i+d, j) \right|
where I_1(x, y) denotes a pixel in the left image, I_2(x, y) denotes a pixel in the right image, and I_2(x, y+d) denotes the region in the right image matching that pixel of the left image, d being the disparity between the left and right images; in images I_1 and I_2, pixel values are extracted from a neighbourhood of size (2n+1, 2m+1); N_E(n) and N_W(n) respectively denote the grid windows of the image array excluding the rightmost column and the topmost row; \nabla_E denotes the horizontal gradient and \nabla_W the vertical gradient.
Step 2.2: compute the matching cost function of the preliminary disparity image, with the specific formula
C(n, d) = σ·C_{GRAD}(n, d) + (1 − σ)·C_{ssd}(x, y, d)
where σ denotes a weight.
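A minimal NumPy sketch of the combined cost above; the window handling and the gradient approximation via `np.gradient` are our assumptions, and the N_E/N_W window distinction from the formula is simplified to whole-window gradient sums:

```python
import numpy as np

def matching_cost(I1, I2, x, y, d, n=2, m=2, sigma=0.5):
    """C = sigma * C_GRAD + (1 - sigma) * C_SSD over a (2n+1, 2m+1) window
    centred at (x, y) in the left image and (x, y+d) in the right image."""
    w1 = I1[x - n:x + n + 1, y - m:y + m + 1].astype(float)
    w2 = I2[x - n:x + n + 1, y + d - m:y + d + m + 1].astype(float)
    c_ssd = np.sum((w1 - w2) ** 2)          # SSD term
    gx1, gy1 = np.gradient(w1)              # gradient term: sums of absolute
    gx2, gy2 = np.gradient(w2)              # gradient differences
    c_grad = np.sum(np.abs(gx1 - gx2)) + np.sum(np.abs(gy1 - gy2))
    return sigma * c_grad + (1.0 - sigma) * c_ssd
```

When the right image is the left image shifted by d pixels, the cost at the true disparity is zero, so a winner-takes-all scan over candidate d values recovers the disparity.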
The bilateral filtering formula in Step 3 is:
H(x) = \frac{1}{T_p} \sum_{(x,y)\in W} G_{\sigma_s}(|M-N|)\, G_{\sigma_r}(|G_M-G_N|)\, G_N
T_p = \sum_{N\in W} G_{\sigma_s}(|M-N|)\, G_{\sigma_r}(|G_M-G_N|)
where G(x, y) denotes the grayscale image corresponding to the original image; the gray value of the feature point M(x_0, y_0) in this image is denoted G_M; N(x_1, y_1) denotes a pixel in the neighbourhood W around M, with gray value G_N; |M − N| denotes the disparity value at the corresponding position in the disparity image; T_p denotes the normalization factor; G_{\sigma_s} and G_{\sigma_r} are weighting factors based on the Gaussian function, G_{\sigma_s} being the scale factor based on pixel spacing and G_{\sigma_r} the scale factor based on pixel gray value, with the expressions:
G_{\sigma_s} = e^{-\left[(x_0-x_1)^2 + (y_0-y_1)^2\right]/2\sigma_s^2}
G_{\sigma_r} = e^{-(G_M-G_N)^2/2\sigma_r^2}
where σ_s denotes the distance standard deviation of the Gaussian function and σ_r the gray-value standard deviation.
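The filter can be sketched as follows. This is one interpretation of the formula, in which the disparity image contributes a third Gaussian weight on the disparity difference; the parameter sigma_d and this multiplicative form are our assumptions, not stated in the patent:

```python
import numpy as np

def disparity_bilateral(gray, disp, sigma_s=3.0, sigma_r=25.0, sigma_d=2.0, radius=3):
    """Bilateral filter on a grayscale image, additionally constrained by a
    disparity map of the same shape."""
    rows, cols = gray.shape
    out = np.empty((rows, cols))
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    g_s = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))  # spatial kernel G_sigma_s
    g = np.pad(gray.astype(float), radius, mode='edge')
    dmap = np.pad(disp.astype(float), radius, mode='edge')
    for i in range(rows):
        for j in range(cols):
            win = g[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            dwin = dmap[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range kernel G_sigma_r on gray-value differences
            g_r = np.exp(-(win - g[i + radius, j + radius]) ** 2 / (2 * sigma_r ** 2))
            # disparity constraint: down-weight neighbours at different depths
            g_d = np.exp(-(dwin - dmap[i + radius, j + radius]) ** 2 / (2 * sigma_d ** 2))
            w = g_s * g_r * g_d
            out[i, j] = np.sum(w * win) / np.sum(w)  # T_p normalisation
    return out
```

The disparity term keeps the averaging within regions of similar depth, which is one way to read "using the disparity image as a constraint on the bilateral filter".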
Further, the present invention also provides an image defogging device based on a binocular vision system, comprising the following modules:
a calibration module for calibrating the binocular camera lenses;
an image acquisition module for capturing two original images with the binocular camera;
a disparity computation module for computing a disparity image from the two original images;
a defogging module for applying bilateral filtering to the original image to obtain the defogged image, using the disparity image as a constraint on the bilateral filter.
Further, the present invention also provides a binocular vision system, characterized by comprising:
left and right cameras for capturing two original images;
a memory for storing a computer program for image defogging;
a processor for executing the computer program on the memory, which first computes a disparity image from the two original images and then applies bilateral filtering to the original image to obtain the defogged image, using the disparity image as a constraint on the bilateral filter.
Beneficial effects of the present invention:
1. The algorithm achieves a certain defogging effect on foggy images and constitutes a new defogging approach;
2. The invention is the first attempt to introduce a binocular vision system into the defogging field; disparity information is introduced during defogging, preventing loss of effective information during image processing;
3. The invention fuses the effective information of the left and right images and, compared with traditional single-image defogging, can provide clearer result images richer in effective information;
4. The invention has wide application scenarios; after technology integration it may be applied to outdoor defogging scenes such as outdoor shooting equipment and traffic cameras.
Brief description of the drawings
Fig. 1 is a flow chart of the color image defogging performed by the present invention;
Fig. 2 is a schematic diagram of the binocular vision principle;
Fig. 3 is a screenshot of the Jean-Yves Bouguet Matlab calibration toolbox;
Fig. 4 is a picture of the camera lenses;
Fig. 5 is the chessboard calibration board;
Fig. 6 is the original foggy image;
Fig. 7 is the grayscale image of the original foggy image;
Fig. 8 is the coarse disparity map;
Fig. 9 is the clear (defogged) image;
Fig. 10 compares the effects of various processing algorithms, where 10(a) is the input color JPG image, 10(b) is the image after equalization, 10(c) is the image after enhancement, 10(d) is the Retinex-enhanced image, and 10(e) is the result obtained with the present algorithm;
Fig. 11 compares processing results for different haze levels.
Embodiment
The invention is further described below with reference to the accompanying drawings and embodiments.
Fig. 1 is the flow chart of the color image defogging performed by the present invention.
This embodiment provides an image defogging method based on a binocular vision system, comprising the following steps:
Step 1: calibrate the binocular camera lenses.
The structure of the binocular vision system is shown in Fig. 2. O1 and O2 are the focal points of two cameras with identical physical properties. The coordinate system with O1 as origin is called the camera coordinate system; the system is explained below taking the left camera, with focal point O1, as an example. Z_c1 is the optical axis of the left viewpoint; it coincides with the camera coordinate axis and is perpendicular to the image plane, whose imaging-plane coordinate system (expressing the physical position of a given pixel within the image) is defined as U1-C1V1. O1C1 = O2C2 = f is the focal length of the cameras, obtained by the earlier camera calibration. The projection of the spatial point P(x_w, y_w, z_w) onto the x_w z_w plane makes an angle β1 with Z_c1 and an angle β2 with Z_c2. The coordinate system containing P(x_w, y_w, z_w) is called the world coordinate system and is used to determine the positions of points in space.
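The standard triangulation implied by this geometry recovers depth from disparity as z = f·b/d. The baseline value below is illustrative, since the patent does not state its baseline:

```python
def depth_from_disparity(f_px, baseline, d_px):
    """Depth along the optical axis: z = f * b / d, with focal length f and
    disparity d in pixels and the baseline b in metres."""
    if d_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline / d_px

# With the calibrated focal length ~1223 px and an assumed 12 cm baseline,
# a 24-pixel disparity corresponds to roughly 6.1 m of depth.
z = depth_from_disparity(1223.0, 0.12, 24.0)
```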
After comparing various camera calibration methods, the present invention selects an improved version of a traditional calibration method: the four-step calibration algorithm. The final calibration in the experiments was completed with the Complete Camera Calibration Toolbox for Matlab; a screenshot is shown in Fig. 3.
The binocular camera lenses used in the experimental section and the calibration-board images used during calibration are shown in Figs. 4 and 5.
The calibration board used in the experiments is a 6×8 chessboard plane; each small square of the chessboard measures 35 mm × 35 mm. During experimental calibration, the left and right cameras each captured 25 clear pictures of the calibration board from different angles, in JPG format.
Left camera intrinsic parameters:
Focal length fc = [1222.78834 1225.78533] ± [64.96633 64.81035]
Principal point Cc = [393.70913 293.50352] ± [7.87638 9.02687]
Skew coefficient = [0.00000] ± [0.00000]
Distortion parameters kc = [0.22938 −4.04138 0.00319 0.00245 0.00000] ± [0.07385 1.25539 0.00229 0.00218 0.00000]
Pixel error err = [0.44523 0.56214]
Right camera intrinsic parameters:
Focal length fc = [1227.41164 1227.22653] ± [65.42971 65.96019]
Principal point Cc = [397.99328 300.72748] ± [7.99174 9.56149]
Skew coefficient = [0.00000] ± [0.00000]
Distortion parameters kc = [0.23067 −3.80194 0.00525 0.00314 0.00000] ± [0.09069 1.26832 0.00247 0.00200 0.00000]
Pixel error err = [0.47226 0.59531]
In the above results, the skew coefficients of both cameras are 0, indicating that the x and y pixel axes are mutually perpendicular. The somewhat large pixel errors have two causes: first, the calibration plane was printed on an ordinary printer, so its pixels are inherently less sharp than those of a film-punched calibration board; second, owing to limited funds, the experimental equipment does not reach very high precision.
Step 2: capture two original images with the binocular camera, and compute a disparity image from them.
The present invention introduces the binocular vision system into haze-image clarification experiments as a means of gathering effective image information. Fig. 6 shows the original foggy experimental images captured with the calibrated binocular camera.
The disparity image is obtained as follows:
The present invention uses an adaptive SSD feature matching algorithm; the matching cost is computed from the SSD (sum of squared differences of corresponding pixels in an image sequence) algorithm together with a similarity measure factor based on gradient features. During matching, the region I_2(x, y+d) of the right image matching the pixel I_1(x, y) of the left image is found, where d denotes the disparity between the left and right images. In images I_1 and I_2, pixel values are extracted from a neighbourhood of size (2n+1, 2m+1); the specific formulas are as follows.
In formula 2.1, n denotes a pixel of the image, N(n) is the 4×4 grid window surrounding n, and N_E(n) and N_W(n) are the grid windows of the image array excluding the rightmost column and the topmost row, respectively; \nabla_E denotes the horizontal gradient and \nabla_W the vertical gradient. Then:
C(n, d) = σ·C_{GRAD}(n, d) + (1 − σ)·C_{ssd}(x, y, d)    (1.3)
where σ denotes a weight.
The two steps above complete the matching cost accumulation; formula (1.3) is the matching cost function used to obtain the preliminary disparity image. A range of points is chosen in the image, and the point with the smallest accumulated matching cost is taken as the corresponding matching point; the σ in the formula is determined by the WTA (winner-takes-all) criterion. Smoothing the results of formula (1.3) yields the preliminary coarse disparity image required for the experiments.
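The winner-takes-all selection can be sketched over a cost volume; the dense-volume layout below is our assumption, as the patent describes only the per-point minimum-cost selection:

```python
import numpy as np

def wta_disparity(cost_volume):
    """Pick, for every pixel, the disparity index with the minimum accumulated
    matching cost; cost_volume has shape (rows, cols, num_disparities)."""
    return np.argmin(cost_volume, axis=2)

# Tiny example: one row of two pixels, three candidate disparities each.
costs = np.array([[[5.0, 1.0, 3.0],
                   [0.5, 2.0, 9.0]]])
dmap = wta_disparity(costs)  # -> [[1, 0]]
```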
Preferably, the original RGB image is first processed and reduced to a two-dimensional grayscale image (Fig. 7); the required preliminary disparity image (Fig. 8) is then computed in the MATLAB environment.
Step 3: apply bilateral filtering to the original foggy image (generally the left image) to obtain the defogged image.
Bilateral filtering is a nonlinear filter. Its basic idea is to perform a locally weighted average over the surrounding neighbourhood of each pixel to be filtered, taking into account not only the distance differences between pixels but also the constraint of the gray-value differences between them.
Here, the disparity image obtained by SSD matching is used as a constraint on the spatial-proximity term of the bilateral filter. Based on the spatial distance differences between pixels, and defining the grayscale image of image I as G(x, y), the bilateral filtering formula can be expressed as:
H(x) = \frac{1}{T_p} \sum_{(x,y)\in W} G_{\sigma_s}(|M-N|)\, G_{\sigma_r}(|G_M-G_N|)\, G_N
T_p = \sum_{N\in W} G_{\sigma_s}(|M-N|)\, G_{\sigma_r}(|G_M-G_N|)
where G(x, y) denotes the grayscale image corresponding to the original image; the gray value of the feature point M(x_0, y_0) in this image is denoted G_M; N(x_1, y_1) denotes a pixel in the neighbourhood W around M, with gray value G_N; |M − N| denotes the disparity value at the corresponding position in the disparity image; T_p denotes the normalization factor; G_{\sigma_s} and G_{\sigma_r} are weighting factors based on the Gaussian function, G_{\sigma_s} being the scale factor based on pixel spacing and G_{\sigma_r} the scale factor based on pixel gray value, with the expressions:
G_{\sigma_s} = e^{-\left[(x_0-x_1)^2 + (y_0-y_1)^2\right]/2\sigma_s^2}
G_{\sigma_r} = e^{-(G_M-G_N)^2/2\sigma_r^2}
where σ_s denotes the distance standard deviation of the Gaussian function and σ_r the gray-value standard deviation.
Fig. 9 shows the final defogging result of the experiment.
Defogging effect:
The specific experimental steps and effects of embodiment one are further described below with an example.
This work is the first to introduce a binocular vision system into the field of foggy-image defogging, and it proposes a new calibration method based on the Complete Camera Calibration Toolbox for Matlab during the camera calibration process.
First, the hardware part of the binocular vision system (the left and right camera devices) was calibrated; the experimental procedure was completed with the Complete Camera Calibration Toolbox for Matlab. In the calibration process, the calibration was innovatively corrected by calibrating twice: one third of the distortion coefficients from the first calibration result (replacing random numbers) was used as the correction factor for corner extraction in the second calibration. Comparison of the error ranges and the distributions of reprojection error confirms that this calibration scheme indeed makes the results more accurate.
With the cameras calibrated, three groups of images with different haze levels were collected; disparity images were computed with the SSD matching algorithm and used as a constraint on the bilateral filter to produce the final clear images.
Subjective and objective data analyses of the experimental data are given below. (Fig. 10 shows the defogging results of the present invention against several other methods, from left to right: the original image, histogram equalization, the wavelet transform algorithm, the Retinex algorithm, and the present algorithm. Fig. 11 shows the final defogging results for the three groups of different haze levels.)
(1) For the subjective evaluation, statistics were gathered by questionnaire (the online and paper questionnaires contained the same questions).
After the experiments, the experimental images were printed and bound into booklets, and 120 students were invited to carry out a subjective evaluation by questionnaire. The evaluation covered two indices: the perceived defogging effect of the present algorithm versus the other four classes of algorithms, and the color vividness of the images after defogging. In the questionnaire, the original images and the result images of each algorithm were randomly numbered and permuted, and the participating students were asked to select, according to their own visual perception, the two images with the best defogging effect and the two with the most vivid colors. At the end of the questionnaire, three images from Fig. 10 (each paired with its foggy original) were randomly arranged, and the students were asked to rank their defogging effects. The survey results are summarized in Table 2-1; the percentages give the proportion of respondents choosing each option.
The subjective results can be summarized as follows: (1) 100% of the students considered that the present algorithm has a defogging effect, though one less pronounced than that of other algorithms such as Retinex; when asked which image had the better defogging effect, the present algorithm was chosen 36.7% of the time (44 students). (2) For the color-vividness index, 58.3% of the students (70) considered that images processed by the present algorithm effectively preserve the color information of the image. (3) For the three groups of images in Fig. 10, almost all students considered the algorithm's performance under light fog better than under thick fog.
Table 2-1: subjective evaluation indices of the defogging algorithms (unit: %)
(2) For the objective evaluation, the processed images in Fig. 10 were analyzed using objective metrics. The computed evaluation indices are listed by category in Table 2-2.
Analysis of the objective data:
The variance roughly represents image contrast and is proportional to it. By this metric the wavelet transform performs relatively well, and the present algorithm is slightly better than the original image.
The mean gradient represents the richness of image levels: the larger the value, the richer the levels and the clearer the image. By this metric the Retinex algorithm performs best, and the present algorithm outperforms the wavelet transform.
The information entropy represents the richness of color; in this column the present algorithm outperforms both the wavelet transform and the Retinex algorithm.
The edge strength describes the intensity of the texture features of object edges in the image and is proportional to image sharpness. By this metric the Retinex algorithm performs best, and the present algorithm outperforms the wavelet transform.
Table 2-2: objective evaluation indices of the defogging algorithms
The experiments demonstrate the practical feasibility of introducing the binocular vision system, a mature technology from the field of 3D reconstruction, into foggy-image defogging. Moreover, because the experiment captures two foggy images of the scene, the effective information is in theory doubled, which favours high-quality defogging. From the analysis of the subjective and objective indices above, the present algorithm can indeed defog foggy images to a certain extent, but there remains room for improvement in both the defogging effect and the specific defogging steps.
Those skilled in the art will understand that each of the above modules or steps of the present invention may be implemented by a general-purpose computing device; optionally, they may be implemented with program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device; alternatively, they may each be fabricated as individual integrated circuit modules, or multiple of their modules or steps may be fabricated as a single integrated circuit module. The present invention is not restricted to any specific combination of hardware and software.
Although the embodiments of the present invention have been described above with reference to the accompanying drawings, they do not limit the scope of protection of the invention. Those skilled in the art should understand that, on the basis of the technical scheme of the present invention, various modifications or variations that can be made without creative work still fall within the scope of protection of the invention.
Claims (7)
1. An image defogging method based on a binocular vision system, characterized by comprising the following steps:
Step 1: calibrate the binocular camera lenses;
Step 2: capture two original images with the binocular camera, and compute a disparity image from the two original images;
Step 3: apply bilateral filtering to the original image to obtain the defogged image, using the disparity image as a constraint on the bilateral filter.
2. The image defogging method based on a binocular vision system according to claim 1, characterized in that the calibration in Step 1 uses a four-step calibration algorithm.
3. The image defogging method based on a binocular vision system according to claim 1, characterized in that in the calibration of Step 1, the calibration is corrected by calibrating twice, using one third of the distortion coefficients from the first calibration result as the correction factor for corner extraction in the second calibration.
4. The image defogging method based on a binocular vision system according to claim 1, characterized in that obtaining the disparity image in Step 2 further comprises the following steps:
Step 2.1: matching cost accumulation, specifically using an adaptive SSD feature matching algorithm and computing the matching cost from the SSD algorithm together with a similarity measure factor based on gradient features, with the specific formulas:
C_{ssd}(x, y, d) = \sum_{i=-n}^{n} \sum_{j=-m}^{m} \left[ I_1(x+i, y+j) - I_2(x+i, y+j+d) \right]^2
C_{GRAD}(n, d) = \sum_{(i,j)\in N_E(n)} \left| \nabla_E I_1(i, j) - \nabla_E I_2(i+d, j) \right| + \sum_{(i,j)\in N_W(n)} \left| \nabla_W I_1(i, j) - \nabla_W I_2(i+d, j) \right|
wherein I_1(x, y) denotes a pixel in the left image, I_2(x, y) denotes a pixel in the right image, and I_2(x, y+d) denotes the region in the right image matching that pixel of the left image, d being the disparity between the left and right images; in images I_1 and I_2, pixel values are extracted from a neighbourhood of size (2n+1, 2m+1); N_E(n) and N_W(n) respectively denote the grid windows of the image array excluding the rightmost column and the topmost row; \nabla_E denotes the horizontal gradient and \nabla_W the vertical gradient;
Step 2.2: compute the matching cost function of the preliminary disparity image, with the specific formula
C(n, d) = σ·C_{GRAD}(n, d) + (1 − σ)·C_{ssd}(x, y, d)
wherein σ denotes a weight.
5. The image defogging method based on a binocular vision system according to claim 1, characterized in that the bilateral filtering formula in Step 3 is:
H(x) = \frac{1}{T_p} \sum_{(x,y)\in W} G_{\sigma_s}(|M-N|)\, G_{\sigma_r}(|G_M-G_N|)\, G_N
T_p = \sum_{N\in W} G_{\sigma_s}(|M-N|)\, G_{\sigma_r}(|G_M-G_N|)
Wherein, G (x, y) represents the corresponding gray level image of original image, characteristic point M (x0,y0) gray value on this image represents
For GM, N (x1,y1) pixel in characteristic point M surrounding neighbors W is represented, the pixel gray value is expressed as GN, | M-N | expression is regarded
The parallax value of relevant position, T on difference imagepRepresent normalization factor;Gσr、GσsExpression is the factor that counts based on Gaussian function,
GσsIt is the scale factor based on pel spacing, GσrIt is the scale factor based on grey scale pixel value, its expression formula is as follows:
G_{\sigma_s} = e^{-\left[(x_0-x_1)^2 + (y_0-y_1)^2\right]/2\sigma_s^2}
G_{\sigma_r} = e^{-(G_M-G_N)^2/2\sigma_r^2}
wherein σ_s denotes the distance standard deviation of the Gaussian function and σ_r the gray-value standard deviation.
6. An image defogging device based on a binocular vision system, characterized by comprising the following modules:
a calibration module for calibrating the binocular camera lenses;
an image acquisition module for capturing two original images with the binocular camera;
a disparity computation module for computing a disparity image from the two original images;
a defogging module for applying bilateral filtering to the original image to obtain the defogged image, using the disparity image as a constraint on the bilateral filter.
7. A binocular vision system, characterized in that it comprises:
left and right cameras, for capturing two original images;
a memory, for storing a computer program for image defogging; and
a processor, for executing the computer program on the memory, which first obtains a disparity image from the two original images and then performs bilateral filtering on the original image to obtain a defogged image, wherein the disparity image serves as a constraint on the bilateral filtering.
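The disparity computation module in the claims is left abstract. Purely as an illustration of the "disparity image from two original images" step, a naive sum-of-absolute-differences (SAD) block-matching sketch might look as follows; the window size, search range, and all names are assumptions, not the patent's stated method.

```python
import numpy as np

def naive_disparity(left, right, max_disp=8, block=3):
    """Brute-force SAD block matching on a rectified grayscale stereo pair.
    For each left-image block, compare against right-image blocks shifted
    left by d in [0, max_disp) and keep the d with the lowest SAD cost."""
    h, w = left.shape
    pad = block // 2
    disp = np.zeros((h, w))
    for y in range(pad, h - pad):
        for x in range(pad + max_disp, w - pad):
            patch = left[y - pad:y + pad + 1, x - pad:x + pad + 1]
            costs = [
                np.abs(patch - right[y - pad:y + pad + 1, x - d - pad:x - d + pad + 1]).sum()
                for d in range(max_disp)
            ]
            disp[y, x] = int(np.argmin(costs))  # best-matching shift = disparity
    return disp
```

On a synthetic pair where each left-image feature appears three columns further left in the right image, interior pixels recover a disparity of 3.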
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710379740.4A CN107256562A (en) | 2017-05-25 | 2017-05-25 | Image defogging method and device based on binocular vision system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107256562A true CN107256562A (en) | 2017-10-17 |
Family
ID=60027984
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710379740.4A Pending CN107256562A (en) | 2017-05-25 | 2017-05-25 | Image defogging method and device based on binocular vision system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107256562A (en) |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103868460A (en) * | 2014-03-13 | 2014-06-18 | 桂林电子科技大学 | Parallax optimization algorithm-based binocular stereo vision automatic measurement method |
Non-Patent Citations (4)
Title |
---|
WANLAPHA PHUMMARA et al.: "An optimal performance investigation for bilateral filter under four different image types", International Conference on Signal-Image Technology & Internet-Based Systems *
SONG Feifei et al.: "Image enhancement based on binocular stereo vision", Microcomputer & Its Applications *
ZHANG Dongxiang et al.: "Research on image sharpening algorithms based on binocular vision", Journal of Shandong Normal University (Natural Science Edition) *
CHEN Long et al.: "Single image dehazing based on joint bilateral filtering", Journal of Beijing University of Posts and Telecommunications *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111583131A (en) * | 2020-04-16 | 2020-08-25 | 天津大学 | Defogging method based on binocular image |
CN111583131B (en) * | 2020-04-16 | 2022-08-05 | 天津大学 | Defogging method based on binocular image |
CN112306064A (en) * | 2020-11-04 | 2021-02-02 | 河北省机电一体化中试基地 | RGV control system and method for binocular vision identification |
CN113487516A (en) * | 2021-07-26 | 2021-10-08 | 河南师范大学 | Defogging processing method for image data |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108932693B (en) | Face editing and completing method and device based on face geometric information | |
CN107767413A (en) | Image depth estimation method based on convolutional neural networks | |
CN110211043B (en) | Registration method based on grid optimization for panoramic image stitching | |
CN111738314B (en) | Deep learning method of multi-modal image visibility detection model based on shallow fusion | |
CN101877143B (en) | Three-dimensional scene reconstruction method of two-dimensional image group | |
CN101610425B (en) | Method for evaluating stereo image quality and device | |
CN104811693B (en) | Objective evaluation method for the visual comfort of stereoscopic images | |
CN105118048A (en) | Method and device for identifying copying certificate image | |
CN102750731B (en) | Stereoscopic visual saliency computation method based on left and right monocular receptive fields and binocular fusion | |
CN107635136A (en) | No-reference stereoscopic image quality evaluation method based on visual perception and binocular rivalry | |
CN109801215A (en) | Infrared super-resolution imaging method based on generative adversarial networks | |
US8629868B1 (en) | Systems and methods for simulating depth of field on a computer generated display | |
CN109300096A (en) | Multi-focus image fusion method and device | |
Ding et al. | U 2 D 2 Net: Unsupervised unified image dehazing and denoising network for single hazy image enhancement | |
CN109242834A (en) | No-reference stereoscopic image quality evaluation method based on convolutional neural networks | |
CN107256562A (en) | Image defogging method and device based on binocular vision system | |
CN112004078B (en) | Virtual reality video quality evaluation method and system based on generation countermeasure network | |
CN109118544A (en) | Synthetic aperture imaging method based on perspective transform | |
CN113810611B (en) | Data simulation method and device for event camera | |
CN111696049A (en) | Deep learning-based underwater distorted image reconstruction method | |
CN107360416A (en) | Stereo image quality evaluation method based on local multivariate Gaussian description | |
CN109345552A (en) | Stereo image quality evaluation method based on region weight | |
CN110691236B (en) | Panoramic video quality evaluation method | |
Chen et al. | Focus manipulation detection via photometric histogram analysis | |
CN109978928A (en) | Binocular stereo matching method and system based on weighted voting | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20171017 |