CN104935911B - Method and apparatus for high dynamic range image synthesis - Google Patents
Method and apparatus for high dynamic range image synthesis
- Publication number
- CN104935911B (application CN201410101591.1A)
- Authority
- CN
- China
- Prior art keywords
- pixel
- image
- value
- disparity values
- hole
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/111—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/741—Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/15—Processing image signals for colour aspects of image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
- H04N2013/0081—Depth or disparity estimation from stereoscopic image signals
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
Abstract
Embodiments of the invention provide a method and apparatus for high dynamic range (HDR) image synthesis, in the field of image processing, so as to improve the quality of HDR images. The method includes: obtaining a first image and a second image; performing binocular stereo matching on the first image and the second image to obtain a disparity map; synthesizing, from the disparity map and the first image, a virtual view having the same viewing angle as the second image; obtaining a second grayscale image from the second image and a virtual-view grayscale image from the virtual view; obtaining an HDR grayscale map from the second grayscale image and the virtual-view grayscale image by means of an HDR composition algorithm; and obtaining an HDR image from the HDR grayscale map, the second grayscale image, the virtual-view grayscale image, the second image, and the virtual view. Embodiments of the invention are applicable to HDR image synthesis scenarios.
Description
Technical field
The present invention relates to the field of image processing, and in particular to a method and apparatus for high dynamic range image synthesis.
Background art
A high dynamic range (HDR) image is obtained by adjusting the exposure time of a camera, photographing the same scene several times with different exposure times, and merging the images of different exposure times by image composition. An image taken with a long exposure preserves detail in dark regions clearly, while an image taken with a short exposure preserves detail in bright regions clearly. Compared with an ordinary image, an HDR image provides a greater dynamic range and more image detail, and can better reflect the real environment.
Existing HDR image synthesis techniques fall broadly into two classes: the first class is single-camera HDR image synthesis; the second class is multi-camera HDR image synthesis.
In multi-camera HDR image synthesis, multiple cameras first photograph the same object simultaneously with different exposure times to obtain multiple images. Two images are chosen from among them; a disparity map of the two images is then obtained from the correspondences between them; next, using the disparity map and the two images, one of the two images is synthesized into a virtual image at the viewing angle of the other image; finally, the final HDR image is obtained from the virtual image and the image at the other viewing angle.
In implementing the above multi-camera HDR image synthesis, the inventors found at least the following problems in the prior art: when the exposure ratio is large, depth extraction in overly dark and overly bright regions is not accurate enough, so the resulting disparity map introduces noise into the final HDR image; and because virtual-image synthesis in the prior art interpolates using only the neighborhood information of the current image, the resulting HDR image suffers from color distortion. Both problems degrade the quality of the HDR image.
Summary of the invention
Embodiments of the invention provide a method and apparatus for HDR image synthesis, so as to improve the quality of HDR images.
To achieve the above objective, embodiments of the invention adopt the following technical solutions:
In a first aspect, embodiments of the invention provide a method of HDR image synthesis, including: obtaining a first image and a second image, where the first image and the second image are obtained by photographing the same object simultaneously with different exposures; performing binocular stereo matching on the first image and the second image to obtain a disparity map; synthesizing, from the disparity map and the first image, a virtual view having the same viewing angle as the second image; obtaining a second grayscale image from the second image, and a virtual-view grayscale image from the virtual view; obtaining an HDR grayscale map from the second grayscale image and the virtual-view grayscale image by means of an HDR composition algorithm; and obtaining an HDR image from the HDR grayscale map, the second grayscale image, the virtual-view grayscale image, the second image, and the virtual view.
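As a rough illustration of this pipeline, the sketch below implements toy stand-ins for three of the steps: disparity-based warping of the first image to the second viewing angle, grayscale extraction, and grayscale HDR fusion. The function names, the NaN convention for unfilled (hole) pixels, and the well-exposedness weighting in the fusion are all illustrative assumptions, not the patent's algorithms.

```python
import numpy as np

def to_gray(img):
    # Luma approximation for an RGB image of shape (H, W, 3), values in [0, 1].
    return img @ np.array([0.299, 0.587, 0.114])

def warp_with_disparity(img, disp):
    # Shift each pixel of the first image horizontally by its disparity to
    # synthesize a virtual view at the second image's viewing angle.
    # Toy version: integer shifts; unassigned targets stay NaN (holes).
    h, w = disp.shape
    out = np.full(img.shape, np.nan)
    xs = np.arange(w)
    for y in range(h):
        tx = xs - disp[y].astype(int)
        ok = (tx >= 0) & (tx < w)
        out[y, tx[ok]] = img[y, xs[ok]]
    return out

def fuse_hdr_gray(g1, g2):
    # Placeholder grayscale HDR fusion: well-exposedness weighting
    # (the patent's own composition algorithm is not reproduced here).
    w1 = np.exp(-((g1 - 0.5) ** 2) / 0.08)
    w2 = np.exp(-((g2 - 0.5) ** 2) / 0.08)
    return (w1 * g1 + w2 * g2) / (w1 + w2 + 1e-12)
```

With a zero disparity map the warp is the identity, which gives a quick sanity check of the hole convention.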
In a first possible implementation of the first aspect, the method further includes: when synthesizing, from the disparity map and the first image, the virtual view having the same viewing angle as the second image, marking the pixels of the occlusion region of the virtual view as hole pixels, the occlusion region being the region produced by the difference between the angles at which the first image and the second image photograph the same object; or, after synthesizing the virtual view and before obtaining the second grayscale image from the second image and the virtual-view grayscale image from the virtual view, marking the noise pixels or the occlusion region of the virtual view as hole pixels, a noise pixel being produced by a pixel of the disparity map whose disparity value was computed incorrectly. Obtaining the virtual-view grayscale image from the virtual view then includes: obtaining a hole-marked virtual-view grayscale image from the hole-marked virtual view. Obtaining the HDR grayscale map includes: obtaining, from the second grayscale image and the hole-marked virtual-view grayscale image, a hole-marked HDR grayscale map by means of the HDR composition algorithm. Obtaining the HDR image includes: obtaining, from the hole-marked HDR grayscale map, the second grayscale image, the hole-marked virtual-view grayscale image, the second image, and the hole-marked virtual view, a hole-marked HDR image. After the HDR image is obtained, the method further includes: determining, in the second image, the first pixel corresponding to each hole pixel of the hole-marked HDR image; obtaining the similarity coefficients between the adjacent pixels of each hole pixel of the HDR image and the adjacent pixels of the first pixel; and obtaining, from the similarity coefficients and the first pixel, the pixel value of each hole pixel of the HDR image.
With reference to the first aspect or the first possible implementation of the first aspect, in a second possible implementation of the first aspect, performing binocular stereo matching on the first image and the second image to obtain the disparity map includes: obtaining a candidate disparity value set for each pixel of the first image, where the candidate disparity value set contains at least two candidate disparity values; obtaining, from each pixel of the first image, the pixel of the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image, the matching energy Ed(p, di) of each candidate disparity value in the candidate disparity value set of each pixel of the first image, where p denotes a pixel of the first image corresponding to the candidate disparity value set, di denotes the i-th candidate disparity value of pixel p, i = 1, ..., k, and k is the total number of candidate disparity values in the candidate disparity value set of pixel p; obtaining, from the matching energies Ed(p, di), the disparity value of each pixel of the first image; and combining the disparity values of the pixels of the first image into the disparity map.
With reference to the second possible implementation of the first aspect, in a third possible implementation of the first aspect, obtaining the matching energy Ed(p, di) includes: computing, for each candidate disparity value di in the candidate disparity value set of pixel p, the matching energy according to the formula
Ed(p, di) = Σ_{q ∈ Ωp} w(p, q, di) × (I1(q) − a × I2(q − di) − b)²;
where the first fitting parameter a and the second fitting parameter b take the values that make the matching energy Ed(p, di) minimal; w(p, q, di) = wc(p, q, di) ws(p, q, di) wd(p, q, di); the first pixel block Ωp denotes a pixel block of the first image that contains pixel p; pixel q is a pixel adjacent to pixel p and belonging to the first pixel block Ωp; I1(q) denotes the pixel value of pixel q; I2(q − di) denotes the pixel value of pixel q − di in the second image corresponding to pixel q; wc(p, q, di) denotes the pixel weight value; ws(p, q, di) denotes the distance weight value; and wd(p, q, di) denotes the disparity weight value.
With reference to the third possible implementation of the first aspect, in a fourth possible implementation of the first aspect, the pixel weight value wc(p, q, di) may be obtained according to the formula
wc(p, q, di) = exp[−β1 × |I1(p) − I1(q)| × |I2(p − di) − I2(q − di)|];
the distance weight value ws(p, q, di) may be obtained according to the formula
ws(p, q, di) = exp[−β2 × (p − q)²];
and the disparity weight value wd(p, q, di) may be obtained according to the formula
wd(p, q, di) = exp[−β3 × (p − q)² − β4 × |I1(p) − I1(q)| × |I2(p − di) − I2(q − di)|];
where I1(p) denotes the pixel value of pixel p; I2(p − di) denotes the pixel value of pixel p − di in the second image corresponding to pixel p; and the first weight coefficient β1, the second weight coefficient β2, the third weight coefficient β3, and the fourth weight coefficient β4 are preset values.
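Taken literally, the three weight formulas above can be evaluated as follows for a 1-D scanline. The β values are placeholders (the patent only says they are preset), and the scalar, per-pixel-pair interface is an illustrative simplification:

```python
import numpy as np

def support_weight(I1, I2, p, q, di, b1=0.1, b2=0.01, b3=0.01, b4=0.1):
    # w(p, q, di) = wc * ws * wd with the formulas given above.
    # I1, I2 are 1-D intensity arrays; p, q are column indices; di a disparity.
    color = abs(float(I1[p]) - float(I1[q])) * abs(float(I2[p - di]) - float(I2[q - di]))
    wc = np.exp(-b1 * color)                      # pixel weight
    ws = np.exp(-b2 * (p - q) ** 2)               # distance weight
    wd = np.exp(-b3 * (p - q) ** 2 - b4 * color)  # disparity weight
    return wc * ws * wd
```

By construction the weight is 1 when q coincides with p and decays with both spatial distance and intensity difference.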
With reference to the second possible implementation of the first aspect, in a fifth possible implementation of the first aspect, obtaining the matching energy Ed(p, di) includes: computing, for each candidate disparity value di in the candidate disparity value set of pixel p, the matching energy Ed(p, di), where
w(p, q, di) = wc(p, q, di) ws(p, q, di) wd(p, q, di);
the pixel weight value wc(p, q, di) may be obtained according to the formula
wc(p, q, di) = exp[−β1 × |I′1(p) − I′1(q)| × |I′2(p − di) − I′2(q − di)|];
the distance weight value ws(p, q, di) may be obtained according to the formula
ws(p, q, di) = exp[−β2 × (p − q)²];
and the disparity weight value wd(p, q, di) may be obtained according to the formula
wd(p, q, di) = exp[−β3 × (p − q)² − β4 × |I′1(p) − I′1(q)| × |I′2(p − di) − I′2(q − di)|];
where
I′1(p) = I1(p)cosθ − I2(p − di)sinθ;
I′2(p − di) = I1(p)sinθ − I2(p − di)cosθ;
I′1(q) = I1(q)cosθ − I2(q − di)sinθ;
I′2(q − di) = I1(q)sinθ − I2(q − di)cosθ;
and the adjustment angle θ is a preset value greater than 0° and less than 90°.
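The adjusted intensities can be computed directly from the four equations above. Note that the formulas as printed use a minus sign in both I′2 expressions; this sketch reproduces them verbatim, and the default θ of 45° is only an example of a preset value in (0°, 90°):

```python
import math

def adjust_intensities(i1_p, i2_pd, theta_deg=45.0):
    # I1'(p)    = I1(p)cosθ − I2(p − di)sinθ
    # I2'(p−di) = I1(p)sinθ − I2(p − di)cosθ
    t = math.radians(theta_deg)
    return (i1_p * math.cos(t) - i2_pd * math.sin(t),
            i1_p * math.sin(t) - i2_pd * math.cos(t))
```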
With reference to any one of the second to fifth possible implementations of the first aspect, in a sixth possible implementation of the first aspect, obtaining the disparity value of each pixel of the first image from the matching energy Ed(p, di) of each candidate disparity value in the candidate disparity value set of each pixel includes: according to the formula
E(d) = Σ_{p ∈ I} Ed(p, di) + Σ_{p ∈ I} Σ_{q ∈ Np} Vp,q(di, dj);
determining, as the disparity value of each pixel of the first image, the candidate disparity value di of that pixel for which the candidate energy E(di) is minimal; where I denotes the first image; the second pixel block Np denotes a pixel block of the first image that contains pixel p; Vp,q(di, dj) = λ × min(|di − dj|, Vmax); dj denotes the j-th candidate disparity value of pixel q, j = 1, ..., m; m is the total number of candidate disparity values in the candidate disparity value set of pixel q; the smoothing factor λ is a preset value; and Vmax, the maximum disparity difference between adjacent pixels, is a preset value.
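A minimal sketch of the truncated-linear smoothness term Vp,q and of choosing a pixel's disparity from its candidates. The greedy per-pixel choice against already-fixed neighbor disparities is only illustrative; the patent minimizes the candidate energy over the image as a whole:

```python
def smoothness(di, dj, lam=1.0, vmax=3):
    # Vp,q(di, dj) = λ × min(|di − dj|, Vmax)
    return lam * min(abs(di - dj), vmax)

def pick_disparity(candidates, data_energy, neighbor_disps, lam=1.0, vmax=3):
    # Choose the candidate minimizing data term + smoothness penalties to
    # the disparities of neighboring pixels (greedy stand-in).
    return min(candidates,
               key=lambda d: data_energy[d] + sum(smoothness(d, dn, lam, vmax)
                                                  for dn in neighbor_disps))
```

The truncation by Vmax keeps a single large disparity jump (e.g. at an object boundary) from dominating the energy.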
With reference to the first aspect or any one of the first to sixth possible implementations of the first aspect, in a seventh possible implementation of the first aspect, obtaining the HDR image from the HDR grayscale map, the second grayscale image, the virtual-view grayscale image, the second image, and the virtual view includes: obtaining in turn, by formula, the red component value Ired(e), the green component value Igreen(e), and the blue component value Iblue(e) of each pixel of the HDR image; where e denotes a pixel of the HDR image; Igrey(e) denotes the pixel value of the pixel of the HDR grayscale map corresponding to pixel e; the formulas further use the pixel values of the pixels of the second grayscale image and of the virtual-view grayscale image corresponding to pixel e, together with the red, green, and blue component values of the pixels of the second image and of the virtual view corresponding to pixel e; obtaining the pixel value of each pixel of the HDR image from its red, green, and blue component values; and combining the pixel values of the pixels of the HDR image into the HDR image.
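The component formulas themselves did not survive extraction. As an illustrative stand-in, a common ratio-based transfer scales each source's color-to-gray ratios by the fused HDR grayscale value and averages the two sources; the averaging and the epsilon guard are my assumptions, not the patent's formula:

```python
import numpy as np

def hdr_color(hdr_gray, gray2, rgb2, grayv, rgbv, eps=1e-6):
    # Per channel c: Ic(e) ≈ Igrey(e) × mean(rgb2_c/gray2, rgbv_c/grayv).
    # hdr_gray, gray2, grayv: (H, W); rgb2, rgbv: (H, W, 3).
    r2 = rgb2 / (gray2[..., None] + eps)
    rv = rgbv / (grayv[..., None] + eps)
    return hdr_gray[..., None] * 0.5 * (r2 + rv)
```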
With reference to any one of the first to seventh possible implementations of the first aspect, in an eighth possible implementation of the first aspect, marking the noise pixels of the virtual view as hole pixels includes: determining, in the second image, at least two second pixels, the second pixels being pixels that have identical pixel values; obtaining, from the at least two second pixels of the second image, at least two marked pixels of the virtual view, the marked pixels being the pixels of the virtual view that respectively correspond to the at least two second pixels of the second image; obtaining the average pixel value of the at least two marked pixels of the virtual view; determining in turn, for each of the marked pixels, whether the difference between its pixel value and the average pixel value exceeds the noise threshold; and, if the difference between the pixel value of a marked pixel and the average pixel value exceeds the noise threshold, determining that marked pixel to be a noise pixel and marking the noise pixel as a hole pixel.
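A sketch of this check, assuming the correspondences are given as (row, col) coordinates in the virtual view and holes are represented by NaN (both representation choices are mine, not the patent's):

```python
import numpy as np

def mark_noise_as_holes(virtual, marked_coords, noise_threshold):
    # marked_coords: pixels of the virtual view corresponding to
    # second-image pixels that share one pixel value. A marked pixel
    # whose value deviates from the marks' mean by more than the
    # threshold is treated as noise and turned into a hole (NaN).
    out = virtual.astype(float).copy()
    vals = np.array([virtual[y, x] for y, x in marked_coords], dtype=float)
    mean = vals.mean()
    for (y, x), v in zip(marked_coords, vals):
        if abs(v - mean) > noise_threshold:
            out[y, x] = np.nan
    return out
```

The idea is that pixels which were identical in the second image should remain mutually consistent in the virtual view; an outlier among them signals a disparity error.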
With reference to any one of the first to eighth possible implementations of the first aspect, in a ninth possible implementation of the first aspect, obtaining the pixel value of any hole pixel r of the HDR image from its similarity coefficients and the first pixel includes: obtaining the pixel value of hole pixel r according to the formula
I(r) = Σ_{n=0}^{N} an × (I2(r))^n;
where I(r) denotes the pixel value of hole pixel r; I2(r) denotes the pixel value of the pixel of the second image corresponding to hole pixel r; an denotes a similarity coefficient of hole pixel r, n = 0, 1, ..., N; and N is a preset value.
With reference to the ninth possible implementation of the first aspect, in a tenth possible implementation of the first aspect, obtaining the similarity coefficients between the adjacent pixels of any hole pixel r of the HDR image and the adjacent pixels of the first pixel includes: obtaining the similarity coefficients an by a distance-weighted fit over the neighborhood Ψr, i.e. choosing the an that minimize
Σ_{s ∈ Ψr} exp(−γ × (r − s)²) × (I(s) − Σ_{n=0}^{N} an × (I2(s))^n)²;
where s denotes a pixel in the neighborhood Ψr of pixel r in the HDR image; I(s) denotes the pixel value of pixel s; I2(s) denotes the pixel value of the pixel of the second image corresponding to pixel s; r − s denotes the distance between pixel r and pixel s; and γ is a preset weight coefficient for the distance between pixel r and pixel s.
With reference to the ninth possible implementation of the first aspect, in an eleventh possible implementation of the first aspect, obtaining the similarity coefficients between the adjacent pixels of any hole pixel r of the HDR image and the adjacent pixels of the first pixel includes: obtaining the similarity coefficients according to a formula that combines, with a first proportionality coefficient ρ1 and a second proportionality coefficient ρ2, the coefficients over the neighborhood Φr of pixel r with the similarity coefficient a′n; where ρ1 and ρ2 are preset values; s denotes a pixel in the neighborhood Φr of pixel r in the HDR image; A denotes the HDR image; and a′n denotes the similarity coefficient obtained the first time the pixel value of a hole pixel was computed.
With reference to the ninth possible implementation of the first aspect, in a twelfth possible implementation of the first aspect, obtaining the similarity coefficients between the adjacent pixels of any hole pixel r of the HDR image and the adjacent pixels of the first pixel includes: determining whether hole pixel r has a first hole pixel, the first hole pixel being a hole pixel, among the adjacent hole pixels of hole pixel r, whose pixel value has already been obtained; and, if it is determined that there is a first hole pixel, using the similarity coefficient of the first hole pixel as the similarity coefficient of hole pixel r.
In a second aspect, embodiments of the invention provide a method of synthesizing a disparity map, including: obtaining a first image and a second image, the first image and the second image being obtained by photographing the same object simultaneously; obtaining a candidate disparity value set for each pixel of the first image, where the candidate disparity value set contains at least two candidate disparity values; obtaining, from each pixel of the first image, the pixel of the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image, the matching energy Ed(p, di) of each candidate disparity value in the candidate disparity value set of each pixel of the first image, where p denotes a pixel of the first image corresponding to the candidate disparity value set, di denotes the i-th candidate disparity value of pixel p, i = 1, ..., k, and k is the total number of candidate disparity values in the candidate disparity value set of pixel p; obtaining, from the matching energies Ed(p, di), the disparity value of each pixel of the first image; and combining the disparity values of the pixels of the first image into the disparity map.
In a first possible implementation of the second aspect, obtaining the matching energy Ed(p, di) of each candidate disparity value in the candidate disparity value set of each pixel of the first image includes: computing, for each candidate disparity value di in the candidate disparity value set of pixel p, the matching energy according to the formula
Ed(p, di) = Σ_{q ∈ Ωp} w(p, q, di) × (I1(q) − a × I2(q − di) − b)²;
where the first fitting parameter a and the second fitting parameter b take the values that make the matching energy Ed(p, di) minimal; w(p, q, di) = wc(p, q, di) ws(p, q, di) wd(p, q, di); the first pixel block Ωp denotes a pixel block of the first image that contains pixel p; pixel q is a pixel adjacent to pixel p and belonging to the first pixel block Ωp; I1(q) denotes the pixel value of pixel q; I2(q − di) denotes the pixel value of pixel q − di in the second image corresponding to pixel q; wc(p, q, di) denotes the pixel weight value; ws(p, q, di) denotes the distance weight value; and wd(p, q, di) denotes the disparity weight value.
With reference to the first possible implementation of the second aspect, in a second possible implementation of the second aspect, the pixel weight value wc(p, q, di) may be obtained according to the formula
wc(p, q, di) = exp[−β1 × |I1(p) − I1(q)| × |I2(p − di) − I2(q − di)|];
the distance weight value ws(p, q, di) may be obtained according to the formula
ws(p, q, di) = exp[−β2 × (p − q)²];
and the disparity weight value wd(p, q, di) may be obtained according to the formula
wd(p, q, di) = exp[−β3 × (p − q)² − β4 × |I1(p) − I1(q)| × |I2(p − di) − I2(q − di)|];
where I1(p) denotes the pixel value of pixel p; I2(p − di) denotes the pixel value of pixel p − di in the second image corresponding to pixel p; and the first weight coefficient β1, the second weight coefficient β2, the third weight coefficient β3, and the fourth weight coefficient β4 are preset values.
With reference to the second aspect, in a third possible implementation of the second aspect, obtaining the matching energy Ed(p, di) of each candidate disparity value in the candidate disparity value set of each pixel of the first image includes: computing, for each candidate disparity value di in the candidate disparity value set of pixel p, the matching energy Ed(p, di), where
w(p, q, di) = wc(p, q, di) ws(p, q, di) wd(p, q, di);
the pixel weight value wc(p, q, di) may be obtained according to the formula
wc(p, q, di) = exp[−β1 × |I′1(p) − I′1(q)| × |I′2(p − di) − I′2(q − di)|];
the distance weight value ws(p, q, di) may be obtained according to the formula
ws(p, q, di) = exp[−β2 × (p − q)²];
and the disparity weight value wd(p, q, di) may be obtained according to the formula
wd(p, q, di) = exp[−β3 × (p − q)² − β4 × |I′1(p) − I′1(q)| × |I′2(p − di) − I′2(q − di)|];
where
I′1(p) = I1(p)cosθ − I2(p − di)sinθ;
I′2(p − di) = I1(p)sinθ − I2(p − di)cosθ;
I′1(q) = I1(q)cosθ − I2(q − di)sinθ;
I′2(q − di) = I1(q)sinθ − I2(q − di)cosθ;
and the adjustment angle θ is a preset value greater than 0° and less than 90°.
With reference to the second aspect or any one of the first to third possible implementations of the second aspect, in a fourth possible implementation of the second aspect, obtaining the disparity value of each pixel of the first image from the matching energy Ed(p, di) of each candidate disparity value in the candidate disparity value set of each pixel includes: according to the formula
E(d) = Σ_{p ∈ I} Ed(p, di) + Σ_{p ∈ I} Σ_{q ∈ Np} Vp,q(di, dj);
determining, as the disparity value of each pixel of the first image, the candidate disparity value di of that pixel for which the candidate energy E(di) is minimal; where I denotes the first image; the second pixel block Np denotes a pixel block of the first image that contains pixel p; Vp,q(di, dj) = λ × min(|di − dj|, Vmax); dj denotes the j-th candidate disparity value of pixel q, j = 1, ..., m; m is the total number of candidate disparity values in the candidate disparity value set of pixel q; the smoothing factor λ is a preset value; and Vmax, the maximum disparity difference between adjacent pixels, is a preset value.
In a third aspect, embodiments of the invention provide an HDR image synthesis apparatus, including: an obtaining unit, configured to obtain a first image and a second image, the first image and the second image being obtained by photographing the same object simultaneously with different exposures; a disparity processing unit, configured to perform binocular stereo matching on the first image and the second image obtained by the obtaining unit, so as to obtain a disparity map; a virtual-view synthesis unit, configured to synthesize, from the disparity map obtained by the disparity processing unit and the first image obtained by the obtaining unit, a virtual view having the same viewing angle as the second image; a grayscale extraction unit, configured to obtain a second grayscale image from the second image obtained by the obtaining unit, and a virtual-view grayscale image from the virtual view synthesized by the virtual-view synthesis unit; an HDR fusion unit, configured to obtain an HDR grayscale map, by means of an HDR composition algorithm, from the second grayscale image and the virtual-view grayscale image obtained by the grayscale extraction unit; and a color interpolation unit, configured to obtain an HDR image from the HDR grayscale map, the second grayscale image, the virtual-view grayscale image, the second image, and the virtual view.
In a first possible implementation of the third aspect, the apparatus further includes a hole pixel processing unit, configured to mark the noise pixels or the occlusion region of the virtual view as hole pixels, the occlusion region being the region produced by the difference between the angles at which the first image and the second image photograph the same object, and a noise pixel being produced by a pixel of the disparity map whose disparity value was computed incorrectly. The grayscale extraction unit is specifically configured to obtain a hole-marked virtual-view grayscale image from the hole-marked virtual view. The HDR fusion unit is specifically configured to obtain, from the second grayscale image and the hole-marked virtual-view grayscale image, a hole-marked HDR grayscale map by means of the HDR composition algorithm. The color interpolation unit is specifically configured to obtain, from the hole-marked HDR grayscale map, the second grayscale image, the hole-marked virtual-view grayscale image, the second image, and the hole-marked virtual view, a hole-marked HDR image. The hole pixel processing unit is further configured to determine, in the second image, the first pixel corresponding to each hole pixel of the hole-marked HDR image; to obtain the similarity coefficients between the adjacent pixels of each hole pixel of the HDR image and the adjacent pixels of the first pixel; and to obtain, from the similarity coefficients and the first pixel, the pixel value of each hole pixel of the HDR image.
With reference to the third aspect or the first possible implementation of the third aspect, in a second possible implementation of the third aspect, the disparity processing unit includes an obtaining module, a computing module, a determining module, and a combining module. The obtaining module is configured to obtain the candidate disparity value set of each pixel of the first image, where the candidate disparity value set contains at least two candidate disparity values. The computing module is configured to obtain, from each pixel of the first image, the pixel of the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image, the matching energy of each candidate disparity value in the candidate disparity value set of each pixel of the first image, where p denotes a pixel of the first image corresponding to the candidate disparity value set, di denotes the i-th candidate disparity value of pixel p, i = 1, ..., k, and k is the total number of candidate disparity values in the candidate disparity value set of pixel p. The determining module is configured to obtain, from the matching energy of each candidate disparity value in the candidate disparity value set of each pixel of the first image, the disparity value of each pixel of the first image. The combining module is configured to combine the disparity values of the pixels of the first image into the disparity map.
With reference to the second possible implementation of the third aspect, in a third possible implementation of the third aspect, the computing module is specifically configured to obtain, according to the formula and for each candidate disparity value di in the candidate disparity values set of pixel p, the matching energy Ed(p,di) of pixel p; where the values of the first fitting parameter a and the second fitting parameter b are the values for which the matching energy Ed(p,di) is minimal; w(p,q,di)=wc(p,q,di)ws(p,q,di)wd(p,q,di); the first pixel block Ωp denotes a pixel block in the first image that contains pixel p; pixel q is a pixel adjacent to pixel p that belongs to the first pixel block Ωp; I1(q) denotes the pixel value of pixel q; I2(q-di) denotes the pixel value of pixel q-di in the second image corresponding to pixel q; wc(p,q,di) denotes the pixel weight value; ws(p,q,di) denotes the distance weight value; and wd(p,q,di) denotes the disparity weight value.
With reference to the third possible implementation of the third aspect, in a fourth possible implementation of the third aspect, the pixel weight value wc(p,q,di) may be obtained according to the formula
wc(p,q,di)=exp[-β1×|I1(p)-I1(q)|×|I2(p-di)-I2(q-di)|];
the distance weight value ws(p,q,di) may be obtained according to the formula
ws(p,q,di)=exp[-β2×(p-q)²];
and the disparity weight value wd(p,q,di) may be obtained according to the formula
wd(p,q,di)=exp[-β3×(p-q)²-β4×|I1(p)-I1(q)|×|I2(p-di)-I2(q-di)|];
where I1(p) denotes the pixel value of pixel p; I2(p-di) denotes the pixel value of pixel p-di in the second image corresponding to pixel p; and the first weight coefficient β1, the second weight coefficient β2, the third weight coefficient β3, and the fourth weight coefficient β4 are preset values.
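The three weight formulas above can be sketched as follows. This is a minimal 1-D illustration: I1 and I2 are sequences of gray values, and the β coefficients are illustrative placeholders, since the patent only requires them to be preset.

```python
import math

def match_weight(I1, I2, p, q, d, b1=0.02, b2=0.02, b3=0.02, b4=0.013):
    """Product w(p,q,d) = wc * ws * wd for 1-D pixel coordinates."""
    color = abs(I1[p] - I1[q]) * abs(I2[p - d] - I2[q - d])
    wc = math.exp(-b1 * color)                      # pixel (color) weight
    ws = math.exp(-b2 * (p - q) ** 2)               # distance weight
    wd = math.exp(-b3 * (p - q) ** 2 - b4 * color)  # disparity weight
    return wc * ws * wd
```

All three factors lie in (0, 1], so the combined weight is largest when p and q are close in position and similar in color, matching the interpretations given later in the text.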
With reference to the second possible implementation of the third aspect, in a fifth possible implementation of the third aspect, the computing module is specifically configured to obtain, according to the formula and for each candidate disparity value di in the candidate disparity values set of pixel p, the matching energy Ed(p,di) of pixel p;
where w(p,q,di)=wc(p,q,di)ws(p,q,di)wd(p,q,di);
the pixel weight value wc(p,q,di) may be obtained according to the formula
wc(p,q,di)=exp[-β1×|I′1(p)-I′1(q)|×|I′2(p-di)-I′2(q-di)|];
the distance weight value ws(p,q,di) may be obtained according to the formula
ws(p,q,di)=exp[-β2×(p-q)²];
the disparity weight value wd(p,q,di) may be obtained according to the formula
wd(p,q,di)=exp[-β3×(p-q)²-β4×|I′1(p)-I′1(q)|×|I′2(p-di)-I′2(q-di)|];
I′1(p)=I1(p)cosθ-I2(p-di)sinθ;
I′2(p-di)=I1(p)sinθ-I2(p-di)cosθ;
I′1(q)=I1(q)cosθ-I2(q-di)sinθ;
I′2(q-di)=I1(q)sinθ-I2(q-di)cosθ;
and the adjustment angle θ is a preset value greater than 0° and less than 90°.
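The intensity remapping defined by the four I′ equations above can be sketched as follows; the sign pattern follows the text as given, and the function is a direct transcription, not an interpretation.

```python
import math

def rotate_pair(i1, i2, theta_deg):
    """Remap an (I1, I2) intensity pair by the preset adjustment angle theta,
    as in I'1 = I1*cos(theta) - I2*sin(theta), I'2 = I1*sin(theta) - I2*cos(theta)."""
    t = math.radians(theta_deg)
    i1_rot = i1 * math.cos(t) - i2 * math.sin(t)
    i2_rot = i1 * math.sin(t) - i2 * math.cos(t)
    return i1_rot, i2_rot
```

As the later discussion of Figs. 2 and 3 explains, this remapping flattens steep slopes in the mapping curve between the two exposures before the weights are computed.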
With reference to any one of the second to fifth possible implementations of the third aspect, in a sixth possible implementation of the third aspect, the determining module is specifically configured to determine, according to the formula, the candidate disparity value of each pixel that makes the candidate energy E(di) over the candidate disparity values di in the candidate disparity values set of pixel p a minimum, as the disparity value of the corresponding pixel in the first image; where I denotes the first image; the second pixel block Np denotes a pixel block in the first image that contains pixel p; Vp,q(di,dj)=λ×min(|di-dj|,Vmax); dj denotes the j-th candidate disparity value of pixel q, j=1,……,m; m is the total number of candidate disparity values in the candidate disparity values set of pixel q; the smoothing factor λ is a preset value; and the maximum disparity difference between adjacent pixels, Vmax, is a preset value.
With reference to the third aspect or any one of the first to sixth possible implementations of the third aspect, in a seventh possible implementation of the third aspect, the color interpolation unit is specifically configured to obtain in turn, according to the formulas, the red component value Ired(e), the green component value Igreen(e), and the blue component value Iblue(e) of each pixel in the high dynamic range image; where e denotes a pixel e in the high dynamic range image; Igrey(e) denotes the pixel value of the pixel corresponding to pixel e in the HDR gray-scale map; the pixel value of the pixel corresponding to pixel e in the second gray level image and the pixel value of the pixel corresponding to pixel e in the virtual-view gray level image are also used; and the red, green, and blue component values of the pixel corresponding to pixel e in the second image and the red, green, and blue component values of the pixel corresponding to pixel e in the virtual view are used respectively. The color interpolation unit is specifically configured to obtain the pixel value of each pixel in the high dynamic range image according to the red, green, and blue component values of each pixel in the high dynamic range image, and to combine the pixel values of all pixels in the high dynamic range image into the high dynamic range image.
With reference to the third aspect or any one of the first to seventh possible implementations of the third aspect, in an eighth possible implementation of the third aspect, the hole pixel processing unit is specifically configured to determine at least two second pixels in the second image, where the second pixels are pixels having the same pixel value; to obtain, according to the at least two second pixels in the second image, at least two marked pixels in the virtual view, where the at least two marked pixels in the virtual view are the pixels in the virtual view that respectively correspond to the at least two second pixels in the second image; to obtain the average pixel value of the at least two marked pixels in the virtual view; to determine in turn whether the difference between the pixel value of each of the at least two marked pixels in the virtual view and the average pixel value is greater than the noise threshold, where the noise threshold is a preset value used for judging noise; and, in the case that the difference between the pixel value of a marked pixel and the average pixel value is greater than the noise threshold, to determine that marked pixel to be a noise pixel and mark the noise pixel as a hole pixel.
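The noise-detection step above can be sketched as follows: pixels in the virtual view that correspond to equal-valued pixels of the second image should themselves be similar, so any one that deviates from their mean by more than the preset noise threshold is treated as noise and marked as a hole pixel.

```python
def find_hole_pixels(marked_values, noise_threshold):
    """Return the indices of marked pixels whose value differs from the mean
    of all marked pixels by more than the preset noise threshold."""
    mean = sum(marked_values) / len(marked_values)
    return [i for i, v in enumerate(marked_values)
            if abs(v - mean) > noise_threshold]
```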
With reference to the third aspect or any one of the first to eighth possible implementations of the third aspect, in a ninth possible implementation of the third aspect, the hole pixel processing unit is specifically configured to obtain the pixel value of the hole pixel r according to the formula; where I(r) denotes the pixel value of the hole pixel r; I2(r) denotes the pixel value of the pixel corresponding to the hole pixel r in the second image; an denotes the similarity coefficient of the hole pixel r; n=0,1,……,N; and N is a preset value.
With reference to the ninth possible implementation of the third aspect, in a tenth possible implementation of the third aspect, the hole pixel processing unit is specifically configured to obtain, according to the formula, the similarity coefficient between the adjacent pixels of any hole pixel r in the high dynamic range image and the adjacent pixels of the first pixel; where s denotes a pixel in the neighborhood Ψr of the pixel r in the high dynamic range image; I(s) denotes the pixel value of the pixel s; I2(s) denotes the pixel value of the pixel corresponding to the pixel s in the second image; r-s denotes the distance between the pixel r and the pixel s; and γ is a preset weight coefficient for the distance between the pixel r and the pixel s.
With reference to the ninth possible implementation of the third aspect, in an eleventh possible implementation of the third aspect, the hole pixel processing unit is specifically configured to obtain, according to the formula, the similarity coefficient between the adjacent pixels of any hole pixel r in the high dynamic range image and the adjacent pixels of the first pixel; where the first proportionality coefficient ρ1 and the second proportionality coefficient ρ2 are preset values; s denotes a pixel in the neighborhood Φr of the pixel r in the high dynamic range image; A denotes the high dynamic range image; and a′n denotes the similarity coefficient obtained when the pixel value of the hole pixel was calculated for the first time.
With reference to the ninth possible implementation of the third aspect, in a twelfth possible implementation of the third aspect, the hole pixel processing unit is specifically configured to determine whether the hole pixel r has a first hole pixel, where the first hole pixel is a hole pixel among the adjacent hole pixels of the hole pixel r whose pixel value has already been obtained; and, in the case that it is determined that a first hole pixel exists, to use the similarity coefficient of the first hole pixel as the similarity coefficient of the hole pixel r.
In a fourth aspect, an embodiment of the present invention provides a device, including: an acquiring unit, configured to obtain a first image and a second image, where the first image and the second image are obtained by shooting the same object at the same time; the acquiring unit being further configured to obtain a candidate disparity values set for each pixel of the first image, where the candidate disparity values set includes at least two candidate disparity values; a computing unit, configured to obtain the matching energy Ed(p,di) of each candidate disparity value in the candidate disparity values set of each pixel of the first image, according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the candidate disparity values set of each pixel of the first image, where p denotes a pixel p, the pixel of the first image corresponding to the candidate disparity values set, di denotes the i-th candidate disparity value of pixel p, i=1,……,k, and k is the total number of candidate disparity values in the candidate disparity values set of pixel p; a determining unit, configured to obtain the disparity value of each pixel in the first image according to the matching energy Ed(p,di) of each candidate disparity value in the candidate disparity values set of each pixel of the first image; and a processing unit, configured to combine the disparity values of all pixels in the first image to obtain the disparity map.
In a first possible implementation of the fourth aspect, the computing unit is specifically configured to obtain, according to the formula and for each candidate disparity value di in the candidate disparity values set of pixel p, the matching energy Ed(p,di) of pixel p; where the values of the first fitting parameter a and the second fitting parameter b are the values for which the matching energy Ed(p,di) is minimal; w(p,q,di)=wc(p,q,di)ws(p,q,di)wd(p,q,di); the first pixel block Ωp denotes a pixel block in the first image that contains pixel p; pixel q is a pixel adjacent to pixel p that belongs to the first pixel block Ωp; I1(q) denotes the pixel value of pixel q; I2(q-di) denotes the pixel value of pixel q-di in the second image corresponding to pixel q; wc(p,q,di) denotes the pixel weight value; ws(p,q,di) denotes the distance weight value; and wd(p,q,di) denotes the disparity weight value.
With reference to the first possible implementation of the fourth aspect, in a second possible implementation of the fourth aspect, the pixel weight value wc(p,q,di) may be obtained according to the formula
wc(p,q,di)=exp[-β1×|I1(p)-I1(q)|×|I2(p-di)-I2(q-di)|];
the distance weight value ws(p,q,di) may be obtained according to the formula
ws(p,q,di)=exp[-β2×(p-q)²];
and the disparity weight value wd(p,q,di) may be obtained according to the formula
wd(p,q,di)=exp[-β3×(p-q)²-β4×|I1(p)-I1(q)|×|I2(p-di)-I2(q-di)|];
where I1(p) denotes the pixel value of pixel p; I2(p-di) denotes the pixel value of pixel p-di in the second image corresponding to pixel p; and the first weight coefficient β1, the second weight coefficient β2, the third weight coefficient β3, and the fourth weight coefficient β4 are preset values.
With reference to the fourth aspect, in a third possible implementation of the fourth aspect, the computing unit is specifically configured to obtain, according to the formula and for each candidate disparity value di in the candidate disparity values set of pixel p, the matching energy Ed(p,di) of pixel p;
where w(p,q,di)=wc(p,q,di)ws(p,q,di)wd(p,q,di);
the pixel weight value wc(p,q,di) may be obtained according to the formula
wc(p,q,di)=exp[-β1×|I′1(p)-I′1(q)|×|I′2(p-di)-I′2(q-di)|];
the distance weight value ws(p,q,di) may be obtained according to the formula
ws(p,q,di)=exp[-β2×(p-q)²];
the disparity weight value wd(p,q,di) may be obtained according to the formula
wd(p,q,di)=exp[-β3×(p-q)²-β4×|I′1(p)-I′1(q)|×|I′2(p-di)-I′2(q-di)|];
I′1(p)=I1(p)cosθ-I2(p-di)sinθ;
I′2(p-di)=I1(p)sinθ-I2(p-di)cosθ;
I′1(q)=I1(q)cosθ-I2(q-di)sinθ;
I′2(q-di)=I1(q)sinθ-I2(q-di)cosθ;
and the adjustment angle θ is a preset value greater than 0° and less than 90°.
With reference to the fourth aspect or any one of the first to third possible implementations of the fourth aspect, in a fourth possible implementation of the fourth aspect, the determining unit is specifically configured to determine, according to the formula, the candidate disparity value of each pixel that makes the candidate energy E(di) over the candidate disparity values di in the candidate disparity values set of pixel p a minimum, as the disparity value of the corresponding pixel in the first image; where I denotes the first image; the second pixel block Np denotes a pixel block in the first image that contains pixel p; Vp,q(di,dj)=λ×min(|di-dj|,Vmax); dj denotes the j-th candidate disparity value of pixel q, j=1,……,m; m is the total number of candidate disparity values in the candidate disparity values set of pixel q; the smoothing factor λ is a preset value; and the maximum disparity difference between adjacent pixels, Vmax, is a preset value.
In the method and device for high dynamic range image synthesis provided by the embodiments of the present invention, a first image and a second image with different exposures are obtained; binocular stereo matching is performed on the first image and the second image to obtain a disparity map; according to the disparity map and the first image, a virtual view having the same viewing angle as the second image is synthesized; a second gray level image is then obtained from the second image and a virtual-view gray level image is obtained from the virtual view; an HDR gray-scale map is obtained from the second gray level image and the virtual-view gray level image through a high dynamic range composition algorithm; and finally a high dynamic range image is obtained according to the HDR gray-scale map, the second gray level image, the virtual-view gray level image, the second image, and the virtual view. In this way, because the relationship between adjacent pixels is considered during virtual view synthesis, the quality of the high dynamic range image is improved.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required in the embodiments or the description of the prior art are briefly introduced below. Apparently, the drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a method for high dynamic range image synthesis according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of a mapping curve according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of coordinate system rotation according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the error rates of different stereo matching algorithms according to an embodiment of the present invention;
Fig. 5 is a schematic flowchart of another method for high dynamic range image synthesis according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of determining a noise pixel according to an embodiment of the present invention;
Fig. 7 is a schematic flowchart of a method for synthesizing a disparity map according to an embodiment of the present invention;
Fig. 8 is a functional schematic diagram of a high dynamic range image synthesis device according to an embodiment of the present invention;
Fig. 9 is a functional schematic diagram of the parallax processing unit of the high dynamic range image synthesis device shown in Fig. 8;
Fig. 10 is a functional schematic diagram of another high dynamic range image synthesis device according to an embodiment of the present invention;
Fig. 11 is a functional schematic diagram of a device according to an embodiment of the present invention.
Embodiment
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
An embodiment of the present invention provides a method for high dynamic range image synthesis, as shown in Fig. 1, including:
101. Obtain a first image and a second image.
The first image and the second image are obtained by shooting the same object at the same time with different exposures.
It should be noted that there is an overlapping region between the first image and the second image.
It should be noted that the first image and the second image are corrected images, between which there is only a displacement in the horizontal direction or the vertical direction.
It should be noted that, as for the relative exposure of the first image and the second image, the exposure of the first image may be greater than that of the second image, or the exposure of the second image may be greater than that of the first image; the specific exposure relationship between the first image and the second image is not limited in the present invention.
102. Perform binocular stereo matching on the first image and the second image to obtain a disparity map.
It should be noted that binocular stereo matching is the process of matching corresponding pixels in two images of the same object taken from two viewpoints, so as to calculate the disparity and obtain three-dimensional information of the object.
Specifically, the method of performing binocular stereo matching on the first image and the second image to obtain the disparity map may be any existing method for obtaining the disparity map of two images, such as WSAD (Weighted Sum of Absolute Differences) or ANCC (Adaptive Normalized Cross-Correlation), or may be the method proposed by the present invention.
The binocular stereo matching algorithm proposed by the present invention is specifically as follows:
S1. Obtain a candidate disparity values set for each pixel of the first image.
The candidate disparity values set includes at least two candidate disparity values.
It should be noted that a candidate disparity value corresponds to a depth in three-dimensional space. Because the depth has a certain range, the candidate disparity values also have a certain range; each value in this range is a candidate disparity value, and these candidate disparity values together constitute the candidate disparity values set of a pixel.
It should be noted that the candidate disparity values in the candidate disparity values sets of the pixels in the first image may be the same or may differ; the present invention is not limited in this respect.
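Step S1 can be sketched minimally: since scene depth is bounded, the candidate disparity values can be enumerated over a preset range [d_min, d_max]. Here every pixel shares the same candidate set, although as noted above the sets are allowed to differ per pixel.

```python
def candidate_disparities(d_min, d_max):
    """Enumerate the candidate disparity values set for a pixel, given the
    disparity range implied by the bounded scene depth."""
    return list(range(d_min, d_max + 1))
```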
S2. According to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the candidate disparity values set of each pixel of the first image, obtain the matching energy Ed(p,di) of each candidate disparity value in the candidate disparity values set of each pixel of the first image.
Here, p denotes a pixel p, the pixel of the first image corresponding to the candidate disparity values set; di denotes the i-th candidate disparity value of pixel p, i=1,……,k; and k is the total number of candidate disparity values in the candidate disparity values set of pixel p.
It should be noted that, because the candidate disparity values set of each pixel includes at least two candidate disparity values, k ≥ 2.
Further, the present invention proposes two methods of calculating the matching energy Ed(p,di), as follows.
First method: according to the formula, for each candidate disparity value di in the candidate disparity values set of pixel p, obtain the matching energy Ed(p,di) of pixel p.
Here, the values of the first fitting parameter a and the second fitting parameter b are the values for which the matching energy Ed(p,di) is minimal; w(p,q,di)=wc(p,q,di)ws(p,q,di)wd(p,q,di); the first pixel block Ωp denotes a pixel block in the first image that contains pixel p; pixel q is a pixel adjacent to pixel p that belongs to the first pixel block Ωp; I1(q) denotes the pixel value of pixel q; I2(q-di) denotes the pixel value of pixel q-di in the second image corresponding to pixel q; wc(p,q,di) denotes the pixel weight value; ws(p,q,di) denotes the distance weight value; and wd(p,q,di) denotes the disparity weight value.
Further, the pixel weight value wc(p,q,di) may be obtained according to the formula
wc(p,q,di)=exp[-β1×|I1(p)-I1(q)|×|I2(p-di)-I2(q-di)|];
the distance weight value ws(p,q,di) may be obtained according to the formula
ws(p,q,di)=exp[-β2×(p-q)²];
and the disparity weight value wd(p,q,di) may be obtained according to the formula
wd(p,q,di)=exp[-β3×(p-q)²-β4×|I1(p)-I1(q)|×|I2(p-di)-I2(q-di)|];
where I1(p) denotes the pixel value of pixel p; I2(p-di) denotes the pixel value of pixel p-di in the second image corresponding to pixel p; and the first weight coefficient β1, the second weight coefficient β2, the third weight coefficient β3, and the fourth weight coefficient β4 are preset values.
It should be noted that, substituting the calculation formulas of the pixel weight value wc(p,q,di), the distance weight value ws(p,q,di), and the disparity weight value wd(p,q,di), we obtain w(p,q,di)=exp[-(β2+β3)×(p-q)²-(β1+β4)×|I1(p)-I1(q)|×|I2(p-di)-I2(q-di)|]. Empirically, β2+β3 may be set to 0.040 and β1+β4 to 0.033.
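The combined weight after substitution can be sketched directly, using the empirical sums β2+β3=0.040 and β1+β4=0.033 quoted above (1-D pixel coordinates for simplicity):

```python
import math

def combined_weight(I1, I2, p, q, d, bs=0.040, bc=0.033):
    """w(p,q,d) = exp[-(b2+b3)*(p-q)^2 - (b1+b4)*|I1(p)-I1(q)|*|I2(p-d)-I2(q-d)|],
    with bs = b2+b3 and bc = b1+b4 set to the empirical values from the text."""
    color = abs(I1[p] - I1[q]) * abs(I2[p - d] - I2[q - d])
    return math.exp(-bs * (p - q) ** 2 - bc * color)
```

Working with the two summed coefficients avoids choosing the four β values individually, which is why only the sums need to be tuned.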
It should be noted that the first pixel block Ωp denotes a pixel block in the first image that contains pixel p. The first pixel block may be a 3-neighborhood or a 4-neighborhood of pixel p, and may or may not be centered on pixel p; the specific size of the first pixel block and the specific position of pixel p within it are not limited in the present invention.
It should be noted that the larger the region covered by the first pixel block, that is, the more values pixel q takes, the smaller the difference between the calculated result and the actual result.
It should be noted that, after the two pictures have been taken, there is a correspondence between the pixel values of corresponding points in the two images. It is assumed here that a smooth mapping function represents the relationship between the pixel value of any pixel in one image and the pixel value of the corresponding pixel in the other image. In the embodiment of the present invention, the linear equation I1(f)=a×I2(j)+b is chosen to represent this mapping function, where I2(j) denotes the pixel value of any pixel j in the second image, I1(f) denotes the pixel value of the pixel f in the first image corresponding to the pixel j in the second image, and a and b are fitting parameters that vary with pixel position. That is, for different pixels, the first fitting parameter a and the second fitting parameter b are also different.
It should be noted that, when calculating according to the above formula, the match point in the second image corresponding to each pixel needs to be determined. For ease of calculation, the pixel in the second image corresponding to any pixel f in the first image may be denoted f-d, where d denotes the disparity value of pixel f in the first image relative to its corresponding pixel in the second image. It should be noted that, because the actual disparity between a pixel in the first image and the corresponding pixel in the second image is unknown at this point, the actual disparity value is approximated by a candidate disparity value.
It should be noted that multiple candidate disparity values are set for each pixel in the first image; the candidate disparity values of a pixel form the candidate disparity values set of that pixel, and the candidate disparity value in the set that differs least from the actual disparity value is then selected as the calculated disparity value of the pixel. That is, the calculated disparity value of a pixel in the embodiment of the present invention is not the actual disparity value of the pixel, but the value in the candidate disparity values set of the pixel that best approximates the actual disparity value.
It should be noted that, in the embodiment of the present invention, the pixel value weight
wc(p,q,di)=exp[-β1×|I1(p)-I1(q)|×|I2(p-di)-I2(q-di)|]
indicates that the closer the colors of pixel p and pixel q in the first image, the larger the pixel value weight;
the distance weight
ws(p,q,di)=exp[-β2×(p-q)²]
indicates that the closer the actual distance between pixel p and pixel q in the first image, the larger the distance weight;
and the disparity value weight
wd(p,q,di)=exp[-β3×(p-q)²-β4×|I1(p)-I1(q)|×|I2(p-di)-I2(q-di)|]
indicates that the closer the disparity values of pixel p and pixel q in the first image, the larger the disparity value weight.
It should be noted that, as shown in Fig. 2, which depicts over-bright and over-dark regions, for the same position a pixel in the first image corresponds to a pixel in the second image. Taking the pixel value in the second image as the horizontal axis and the pixel value in the first image as the vertical axis, and plotting the pixel values of co-located pixels, the dot pattern in the lower part of the figure is obtained; the mapping curve fitted to the dot pattern has two tangent lines, n and m. It can be seen from the figure that when the slope of a tangent line is large, the mapping curve is more easily affected by noise. To reduce this influence, the coordinate system in Fig. 2 can be rotated counterclockwise by an angle between 0° and 90°, giving Fig. 3; tangent line n is a straight line with slope tan α in the coordinate system of Fig. 2. Because the coordinate system has been rotated counterclockwise by θ, the slope of tangent line n in the new coordinate system is reduced to tan(α-θ).
For example, if the slope of tangent line n in the original coordinate axes is too large, e.g. α ≈ 90°, and the new coordinate axes are rotated by 45°, then the slope of tangent line n in the new axes is greatly reduced, becoming tan(α-45°) ≈ 1.
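The slope-reduction example above can be checked numerically: a tangent with α near 90° has a very steep slope, but after rotating the axes by θ = 45° the effective slope tan(α-θ) drops to around 1.

```python
import math

def rotated_slope(alpha_deg, theta_deg):
    """Slope of a tangent line with angle alpha after rotating the
    coordinate axes counterclockwise by theta: tan(alpha - theta)."""
    return math.tan(math.radians(alpha_deg - theta_deg))
```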
Further, the second method is obtained by optimizing the first method:
according to the formula, for each candidate disparity value di in the candidate disparity values set of pixel p, obtain the matching energy Ed(p,di) of pixel p.
Here, w(p,q,di)=wc(p,q,di)ws(p,q,di)wd(p,q,di);
the pixel weight value wc(p,q,di) may be obtained according to the formula
wc(p,q,di)=exp[-β1×|I′1(p)-I′1(q)|×|I′2(p-di)-I′2(q-di)|];
the distance weight value ws(p,q,di) may be obtained according to the formula
ws(p,q,di)=exp[-β2×(p-q)²];
the disparity weight value wd(p,q,di) may be obtained according to the formula
wd(p,q,di)=exp[-β3×(p-q)²-β4×|I′1(p)-I′1(q)|×|I′2(p-di)-I′2(q-di)|];
I′1(p)=I1(p)cosθ-I2(p-di)sinθ;
I′2(p-di)=I1(p)sinθ-I2(p-di)cosθ;
I′1(q)=I1(q)cosθ-I2(q-di)sinθ;
I′2(q-di)=I1(q)sinθ-I2(q-di)cosθ;
and the adjustment angle θ is a preset value greater than 0° and less than 90°.
It should be noted that, because the adjustment angle θ is greater than 0° and less than 90°, the values of cos θ and sin θ lie between 0 and 1.
It should be noted that the pixel value weight
wc(p,q,di)=exp[-β1×|I′1(p)-I′1(q)|×|I′2(p-di)-I′2(q-di)|]
indicates that the closer the colors of pixel p and pixel q in the first image, the larger the pixel value weight;
the distance weight
ws(p,q,di)=exp[-β2×(p-q)²]
indicates that the closer the actual distance between pixel p and pixel q in the first image, the larger the distance weight;
and the disparity value weight
wd(p,q,di)=exp[-β3×(p-q)²-β4×|I′1(p)-I′1(q)|×|I′2(p-di)-I′2(q-di)|]
indicates that the closer the disparity values of pixel p and pixel q in the first image, the larger the disparity value weight.
S3, according to the matching energy Ed(p,di) of each candidate disparity value in the candidate disparity value set of each pixel of the first image, obtain the disparity value of each pixel in the first image.
Further, according to a formula, the candidate disparity values of each pixel that make the candidate energy E(di) of each candidate disparity value di in the candidate disparity value set of pixel p take its minimum value are determined as the disparity values of the corresponding pixels in the first image.
Wherein, I represents the first image; the second pixel block Np represents a pixel block containing pixel p in the first image; Vp,q(di,dj) = λ×min(|di-dj|,Vmax); dj represents the j-th candidate disparity value of pixel q, j = 1,……,m; m is the total number of candidate disparity values in the candidate disparity value set of pixel q; the smoothing coefficient λ is a preset value; the maximum difference Vmax of adjacent-pixel disparities is a preset value.
It should be noted that the candidate energy contains two parts: the first part is the sum of the matching energies of each pixel in the first image, and the second part is the sum of the smoothing energies Vp,q(di,dj) of each pixel in the first image.
It should be noted that the second pixel block Np may be identical to or different from the first pixel block; the present invention places no limitation on this.
It should be noted that min(x,y) represents the function taking the smaller of x and y. min(|di-dj|,Vmax) represents the smaller of the difference between the candidate disparity value di of pixel p in the first image and the candidate disparity value of pixel q, and the preset maximum disparity difference between adjacent pixels. Vmax is a predefined cutoff value whose purpose is to prevent the smoothing energy from becoming too large and thereby affecting the accurate assignment of disparities at foreground and background edges.
It should be noted that the smaller the candidate energy, the greater the similarity between the first image and the second image. That is to say, the better the pixels in the first image match those in the second image.
It should be noted that the disparity value of each pixel of the first image determined in this step is the candidate disparity value of each pixel in that combination of the candidate disparity values of the pixels of the first image for which the obtained candidate energy is a minimum; in this way, the candidate disparity value closest to the actual disparity value is selected from the candidate disparity value set of each pixel. For example, assuming the first image has N pixels and each pixel has M candidate disparity values, there are M^N calculated candidate energy values; the minimum is selected from the M^N candidate energies, and the candidate disparity value of each pixel in the corresponding combination is the finally obtained disparity value of that pixel.
Further, in order to simplify the calculation, the existing graph cuts method can be used to quickly obtain the disparity value of each pixel in the first image, without traversing all candidate disparity values of each pixel.
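The energy minimized in S3 combines the matching term Ed and the smoothing term Vp,q(di,dj) = λ×min(|di-dj|,Vmax). As a sketch of what graph cuts optimizes, a brute-force minimization over a toy 3-pixel image (the matching energies and neighbour structure here are made up for illustration):

```python
import itertools

def total_energy(match_E, pixels, neighbors, assignment, lam=1.0, v_max=2.0):
    """Candidate energy: matching energies plus pairwise smoothing terms."""
    e = sum(match_E[p][assignment[p]] for p in pixels)
    for p, q in neighbors:
        e += lam * min(abs(assignment[p] - assignment[q]), v_max)
    return e

# Toy example: 3 pixels in a row, candidate disparities {0, 1, 2}.
match_E = {0: {0: 0.1, 1: 0.9, 2: 0.9},
           1: {0: 0.8, 1: 0.2, 2: 0.9},
           2: {0: 0.9, 1: 0.3, 2: 0.8}}
pixels = [0, 1, 2]
neighbors = [(0, 1), (1, 2)]

# Exhaustive search over all M^N labelings (3^3 = 27 here).
best = min(itertools.product([0, 1, 2], repeat=3),
           key=lambda a: total_energy(match_E, pixels, neighbors,
                                      dict(zip(pixels, a))))
print(best)  # (1, 1, 1)
```

The smoothing term pulls the labeling toward (1, 1, 1) even though pixel 0's cheapest matching label alone is 0, which is exactly why graph cuts is used instead of per-pixel minimization.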
S4, combine the disparity values of each pixel in the first image to obtain the disparity map.
It should be noted that the disparity map is the image obtained by arranging the disparity values of each pixel in the first image in the original pixel order.
It should be noted that in three cases, namely the exposure ratio of the first image to the second image being 16:1, the exposure ratio being 4:1, and the exposures of the first image and the second image being identical, the error rates of the first method and the second method proposed in the embodiment of the present invention were compared with the existing WSAD algorithm and ANCC algorithm. Fig. 4 shows the comparison result. It can be seen from the figure that when the exposure ratio of the first image to the second image is especially large, namely 16:1, the WSAD and ANCC algorithms have very large error rates, while the results of the first method and the second method proposed in this embodiment are very accurate. In the other two exposure-ratio cases, the first method and the second method proposed in this embodiment are also consistently better than the results of WSAD and ANCC.
It should be noted that although the first method and the second method proposed in the embodiment of the present invention substantially reduce the error rate when calculating the disparity map, a small fraction of pixels still have calculated disparity values that differ greatly from the actual disparity values, so these pixels are regarded as pixels whose disparity values were calculated incorrectly in the disparity map.
103. According to the disparity map and the first image, synthesize a virtual view having the same view angle as the second image.
It should be noted that with the first image, the second image and the disparity map between the first image and the second image known, a virtual view at any angle can be synthesized. In the embodiment of the present invention, for the simplicity of subsequent image processing, the disparity map and the first image are used to synthesize a virtual view having the same view angle as the second image.
It should be noted that the virtual view having the same view angle as the second image can be synthesized from the first image and the disparity map using the prior art.
Specifically, between the first image and the second image in the embodiment of the present invention there is only displacement in the horizontal or vertical direction. When there is only horizontal displacement between the first image and the second image, a formula can be used to obtain the pixel value of each pixel in the virtual view, where I1(x,y) represents the pixel value of the pixel with abscissa x and ordinate y in the first image, d represents the disparity value of the pixel with abscissa x and ordinate y in the first image, and the pixel value of the corresponding pixel in the virtual view is the pixel value obtained after the pixel in the first image is translated horizontally by the disparity value d. When there is only vertical displacement between the first image and the second image, a formula can likewise be used to obtain the pixel value of each pixel in the virtual view, where I1(x,y) represents the pixel value of the pixel with abscissa x and ordinate y in the first image, d represents the disparity value of that pixel, and the pixel value of the corresponding pixel in the virtual view is the pixel value obtained after the pixel in the first image is translated vertically by the disparity value d.
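The exact warping formulas appear only as images in the original, but the surrounding text describes translating each first-image pixel by its disparity value. A sketch under the common convention that the virtual-view column is x - d (an assumption), with unmapped positions left at 0 as holes:

```python
import numpy as np

def warp_horizontal(img1, disparity):
    """Forward-warp the first image into the virtual view using per-pixel
    horizontal disparities; positions that receive no pixel stay 0."""
    h, w = img1.shape[:2]
    virtual = np.zeros_like(img1)
    for y in range(h):
        for x in range(w):
            xv = x - disparity[y, x]     # translate by the disparity value d
            if 0 <= xv < w:
                virtual[y, xv] = img1[y, x]
    return virtual

img1 = np.array([[10, 20, 30, 40]], dtype=np.uint8)
disp = np.array([[1, 1, 1, 1]])
print(warp_horizontal(img1, disp))
```

For this 1×4 row with uniform disparity 1, every pixel shifts left by one and the rightmost virtual position stays 0, which is the kind of unfilled hole the later steps mark and repair.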
104. Obtain the second gray level image according to the second image, and obtain the virtual view gray level image according to the virtual view.
It should be noted that the second gray level image and the virtual view gray level image can be obtained using the prior-art method of obtaining the gray level image of an image from its colour image.
It should be noted that the gray level image of a colour image can be obtained according to the formula Grey = R*0.299 + G*0.587 + B*0.114, or according to the formula Grey = (R+G+B)/3, or by any other prior-art method of obtaining a gray level image from a colour image; the present invention places no restriction on this.
Wherein, R represents the red component of any pixel in the colour image, G represents the green component of the pixel, B represents the blue component of the pixel, and Grey represents the gray scale of the pixel at the corresponding position in the gray-scale map.
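The first of the two conversion formulas can be sketched directly (channel order R, G, B assumed):

```python
import numpy as np

def to_grey(rgb):
    """Luma-weighted grayscale: Grey = 0.299 R + 0.587 G + 0.114 B."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

pixel = np.array([[[100.0, 200.0, 50.0]]])
print(to_grey(pixel))
```

For the pixel (R, G, B) = (100, 200, 50) this gives 29.9 + 117.4 + 5.7 = 153.0; the simpler (R+G+B)/3 formula would instead give about 116.7.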
105. According to the second gray level image and the virtual view gray level image, obtain the high dynamic range gray-scale map through a high dynamic range synthesis algorithm.
It should be noted that the high dynamic range synthesis algorithm refers to an algorithm that fuses several pictures to obtain a high dynamic range image.
It should be noted that in this step, an existing single-camera or multi-camera high dynamic range image synthesis method can be used to fuse the second gray level image and the virtual view gray level image to obtain the high dynamic range gray-scale map.
It should be noted that in the prior art the red, green and blue of the images to be synthesized must be processed separately, whereas when the embodiment of the present invention uses the prior art to calculate the high dynamic range gray-scale map, only the gray scale of the images to be synthesized needs to be processed.
106. According to the high dynamic range gray-scale map, the second gray level image, the virtual view gray level image, the second image and the virtual view, obtain the high dynamic range image.
It should be noted that because the high dynamic range gray-scale map obtained in the previous step does not contain the red, green and blue information, this step uses the colour second image and the virtual view to determine the red component value, green component value and blue component value of each pixel in the high dynamic range gray-scale map.
Specifically, this step includes:
T1, successively use formulas to obtain the red component value Ired(e), green component value Igreen(e) and blue component value Iblue(e) of each pixel in the high dynamic range image.
Wherein, e represents pixel e in the high dynamic range image; Igrey(e) represents the pixel value of the pixel corresponding to pixel e in the high dynamic range gray-scale map; the formulas also use the pixel value of the pixel corresponding to pixel e in the second gray level image, the pixel value of the pixel corresponding to pixel e in the virtual view gray level image, the red component value, green component value and blue component value of the pixel corresponding to pixel e in the second image, and the red component value, green component value and blue component value of the pixel corresponding to pixel e in the virtual view.
It should be noted that η(e) represents a weight coefficient used to adjust, when synthesizing the high dynamic range image, the ratio of the colour of the second image to the colour of the virtual view. η(e) is a value calculated from the relation between the corresponding pixels of the second gray level image, the virtual view gray level image and the high dynamic range gray-scale map.
It should be noted that a respective η(e) needs to be calculated for each pixel in the high dynamic range image, and then the red component value Ired(e), green component value Igreen(e) and blue component value Iblue(e) of each pixel are calculated.
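The colour-restoration formulas are shown only as images in the original, so their exact form is not recoverable here. One plausible reading, consistent with the description of η(e) as a weight blending the colour of the second image and the colour of the virtual view, scales the HDR luma by a blend of the two colour-to-luma ratios; every detail of this sketch is an assumption, not the patent's formula:

```python
def colour_component(grey_hdr, grey2, grey_v, c2, cv, eta):
    """Hypothetical reconstruction of one colour component: scale the HDR
    luma by a blend of the colour/luma ratios of the second image (c2/grey2)
    and the virtual view (cv/grey_v), weighted by eta."""
    ratio = eta * (c2 / grey2) + (1.0 - eta) * (cv / grey_v)
    return grey_hdr * ratio

# Pixel e: HDR luma 120; second image luma 100, red 110;
# virtual view luma 90, red 81; eta = 0.5.
print(colour_component(120.0, 100.0, 90.0, 110.0, 81.0, 0.5))
```

With these values the blended ratio is 0.5×1.1 + 0.5×0.9 = 1.0, so Ired(e) = 120.0; a per-pixel η(e) shifts trust between the two colour sources.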
T2, according to the red component value, green component value and blue component value of each pixel in the high dynamic range image, obtain the pixel value of each pixel in the high dynamic range image.
It should be noted that obtaining the pixel value of each pixel in the high dynamic range image from the red, green and blue component values of each pixel is identical to the prior-art method of obtaining the pixel value of a pixel from its known red, green and blue component values, and the present invention will not repeat it here.
T3, combine the pixel values of each pixel in the high dynamic range image into the high dynamic range image.
It should be noted that the high dynamic range image is formed by multiple pixels arranged in combination, and each pixel can be expressed by a pixel value.
In the method of high dynamic range image synthesis provided by the embodiment of the present invention, the first image and the second image with different exposures are first obtained; binocular stereo matching is then performed on the first image and the second image to obtain a disparity map; a virtual view having the same view angle as the second image is synthesized according to the disparity map and the first image; the second gray level image is then obtained according to the second image and the virtual view gray level image according to the virtual view; the high dynamic range gray-scale map is obtained from the second gray level image and the virtual view gray level image through a high dynamic range synthesis algorithm; finally, the high dynamic range image is obtained according to the high dynamic range gray-scale map, the second gray level image, the virtual view gray level image, the second image and the virtual view. In this way, because the relation between adjacent pixels is considered when synthesizing the virtual view, the quality of the high dynamic range image is improved.
A method of high dynamic range image synthesis provided by an embodiment of the present invention, as shown in Fig. 5, includes:
501. Obtain the first image and the second image.
Wherein, the first image and the second image are obtained by shooting the same object simultaneously with different exposures.
Specifically, refer to step 101; it will not be repeated here.
502. According to the first image and the second image, obtain the disparity map through the binocular stereo matching algorithm.
Specifically, refer to step 102; it will not be repeated here.
It should be noted that the pixels of the occlusion area may be marked as hole pixels either while the virtual view is being synthesized or after the virtual view has been synthesized. Depending on when the pixels of the occlusion area are marked as hole pixels, different steps are performed. If the pixels of the occlusion area are marked as hole pixels while the virtual view is being synthesized, steps 503a-504a and steps 505-509 are performed; if the pixels of the occlusion area are marked as hole pixels when the noise pixels are determined after the virtual view has been synthesized, steps 503b-504b and steps 505-509 are performed.
503a, according to the disparity map and the first image, synthesize a virtual view having the same view angle as the second image, and mark the pixels of the occlusion area in the virtual view as hole pixels.
Wherein, the occlusion area is the area produced because the first image and the second image shoot the same object from different angles.
It should be noted that synthesizing the virtual view having the same view angle as the second image according to the disparity map and the first image is identical to step 103 and will not be repeated here.
It should be noted that because the view angles of the first image and the second image differ, when the first image is mapped to the virtual view having the same view angle as the second image, not every pixel of the second image can be matched by a pixel of the first image; the areas in the virtual image without corresponding pixels form the occlusion area.
It should be noted that the method of marking the occlusion area as hole pixels may be to set the pixel values of the pixels at the positions corresponding to the occlusion area in the virtual view all to one fixed number, such as 1 or 0; it may also be to create an image the same size as the virtual view, setting the pixel values at the positions corresponding to the occlusion area to 0 and the pixel values at the positions of the non-occlusion area to 1; other prior-art methods of marking pixels may also be used, and the present invention places no limitation on this.
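The second marking scheme just described (a same-size image with 0 at occluded positions and 1 elsewhere) can be sketched as follows, assuming holes were left at a sentinel value of 0 during warping:

```python
import numpy as np

def occlusion_mask(virtual, hole_value=0):
    """Same-size marker image: 1 where the virtual view received a pixel,
    0 at positions corresponding to the occlusion area."""
    return (virtual != hole_value).astype(np.uint8)

virt = np.array([[20, 30, 40, 0]])
print(occlusion_mask(virt))
```

A real implementation would record the mask during warping rather than infer it from a sentinel, since a genuine pixel value could equal the sentinel.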
503b, according to the disparity map and the first image, synthesize a virtual view having the same view angle as the second image.
Specifically, refer to step 103; it will not be repeated here.
504a, mark the noise pixels in the virtual view as hole pixels.
Wherein, a noise pixel is a pixel whose disparity value was calculated incorrectly in the disparity map.
It should be noted that because calculating the disparity value of a pixel selects one candidate disparity value from the candidate disparity value set as the disparity value of the pixel, the calculated disparity value may have a certain error; when the error of a certain pixel exceeds a certain range, the pixel is defined as a pixel whose disparity value was calculated incorrectly in the disparity map. Then, when the virtual view having the same view angle as the second image is synthesized according to the disparity map and the first image, the pixels in the disparity map whose disparity values were calculated incorrectly produce noise in the synthesized virtual view; the pixels corresponding to these noise points are here defined as noise pixels.
It should be noted that there is essentially a corresponding rule between the pixel values of pixels in the second image and the pixel values of the corresponding pixels in the virtual view. For example, when the pixel value of a certain pixel in the second image is relatively small among the pixel values of all pixels of the second image, the pixel value of the corresponding pixel in the virtual image is also relatively small among the pixel values of all pixels of the virtual image; when the pixel value of a certain pixel in the second image is relatively large among the pixel values of all pixels of the second image, the pixel value of the corresponding pixel in the virtual image is also relatively large among the pixel values of all pixels of the virtual image. The embodiment of the present invention uses exactly this rule, marking the pixels that do not conform to it as noise pixels.
For example, as shown in Fig. 6, the virtual view contains noise pixels. For the same position, a pixel in the virtual view corresponds to a pixel in the second image. Taking the pixel value in the second image as the horizontal axis and the pixel value in the virtual view as the vertical axis, and plotting the pixel values of the co-located pixels of the two images, the scatter plot in the coordinate system on the right of Fig. 6 is obtained. It can be observed that most points form a smooth increasing curve, while a small number of points lie far from this mapping curve; these points are noise. In our algorithm, we first estimate the mapping curve using all points, then calculate the distance from each point to the mapping curve; if the distance is large, the pixel corresponding to that point in the virtual view is determined to be a noise pixel.
Specifically, the method for selected noise pixel and mark noise pixel refers to following steps:
Q1, in the second image, determine at least two second pixels.
Wherein, the second pixel refers to pixel value identical pixel.
Specifically, the pixel with same pixel value is divided into one group by all pixels of the second image according to pixel value,
It is divided into all pixels in same group and is called the second pixel.
It should be noted that when being unique in all pixels of the pixel value in the second image of a certain pixel, also
It is to say, in the second image during pixel value identical pixel not with the pixel, it is not processed.
Q2, according to the at least two second pixels in the second image, obtain at least two marked pixels in the virtual view.
Wherein, the at least two marked pixels in the virtual view are the pixels in the virtual view corresponding respectively to the at least two second pixels in the second image.
Specifically, the pixels in the virtual view corresponding to the pixels with the same pixel value in the second image are found in turn.
Q3, obtain the average pixel value of the at least two marked pixels in the virtual view.
Specifically, first obtain the pixel value of each pixel of the at least two marked pixels, then sum the pixel values of each pixel of the at least two marked pixels, divide by the number of marked pixels, and obtain the average pixel value of the at least two marked pixels.
Q4, successively determine whether the difference between the pixel value of each marked pixel of the at least two marked pixels in the virtual view and the average pixel value is greater than the noise threshold.
It should be noted that if the difference between the pixel value of a marked pixel and its corresponding average pixel value is greater than the preset noise threshold, the pixel is determined to be a noise pixel; if the difference between the pixel value of a marked pixel and its corresponding average pixel value is not greater than the preset noise threshold, it is determined that the pixel is not a noise pixel.
Q5, if the difference between the pixel value of a marked pixel and the average pixel value is greater than the noise threshold, determine the marked pixel to be a noise pixel, and mark the noise pixel as a hole pixel.
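Steps Q1-Q5 can be sketched end to end: group second-image pixels by identical value, take the mean of the co-located virtual-view pixels per group, and mark outliers as holes. The array shapes and threshold value here are illustrative:

```python
import numpy as np

def mark_noise_as_holes(img2, virtual, threshold):
    """Return a boolean hole mask over the virtual view, True where a
    co-located pixel deviates from its group mean by more than threshold."""
    holes = np.zeros(virtual.shape, dtype=bool)
    for value in np.unique(img2):
        idx = np.where(img2 == value)     # Q1: second pixels sharing a value
        if idx[0].size < 2:               # unique value: not processed
            continue
        marked = virtual[idx]             # Q2: marked pixels in virtual view
        mean = marked.mean()              # Q3: average pixel value
        holes[idx] = np.abs(marked - mean) > threshold  # Q4/Q5
    return holes

img2 = np.array([[5, 5, 5, 7]])
virt = np.array([[10.0, 11.0, 40.0, 9.0]])
print(mark_noise_as_holes(img2, virt, threshold=15.0))
```

In the toy input, the three pixels with value 5 map to virtual-view values 10, 11 and 40 (mean ≈ 20.3); only 40 deviates by more than the threshold and becomes a hole, and the unique value 7 is skipped per Q1.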
It should be noted that the method for mark occlusion area can be with identical with the method for mark noise pixel, can also not
Together, the invention is not limited in this regard.
504b, by the noise pixel and occlusion area in virtual view be labeled as hole pixel.
Specifically, the method for mark occlusion area, the method for referring to mark occlusion area in step 503a, herein no longer
Repeat.
Specifically, the method for determining and marking noise pixel, refers to determine in step 504a and marks noise pixel
Method, will not be repeated here.
505. Obtain the second gray level image according to the second image, and obtain the virtual view gray level image marked with hole pixels according to the virtual view marked with hole pixels.
Specifically, for the processing of non-hole pixels, refer to obtaining the second gray level image according to the second image and the virtual view gray level image according to the virtual view in step 104; it will not be repeated here.
It should be noted that for the hole pixels in the virtual view marked with hole pixels, the corresponding pixels in the virtual view gray level image are directly marked as hole pixels.
506. According to the second gray level image and the virtual view gray level image marked with hole pixels, obtain the high dynamic range gray-scale map marked with hole pixels through the high dynamic range synthesis algorithm.
Specifically, for the processing of non-hole pixels, refer to obtaining the high dynamic range gray-scale map from the second gray level image and the virtual view gray level image through the high dynamic range synthesis algorithm in step 105; it will not be repeated here.
It should be noted that for the hole pixels in the virtual view gray level image marked with hole pixels, the corresponding pixels in the high dynamic range gray-scale map are directly marked as hole pixels.
507. According to the high dynamic range gray-scale map marked with hole pixels, the second gray level image, the virtual view gray level image marked with hole pixels, the second image and the virtual view marked with hole pixels, obtain the high dynamic range image marked with hole pixels.
Specifically, for the processing of non-hole pixels, refer to obtaining the high dynamic range image according to the high dynamic range gray-scale map, the second gray level image, the virtual view gray level image, the second image and the virtual view in step 106; it will not be repeated here.
It should be noted that because the hole pixels in the high dynamic range gray-scale map marked with hole pixels are obtained from the hole pixels in the virtual view gray level image marked with hole pixels, and the hole pixels in the virtual view gray level image marked with hole pixels are in turn obtained from the virtual view marked with hole pixels, the positions of the hole pixels in the high dynamic range gray-scale map marked with hole pixels, in the virtual view gray level image marked with hole pixels and in the virtual view marked with hole pixels are identical.
It should be noted that because the positions of the hole pixels in these three images are identical, any of the three images can be selected as the standard, and the pixels corresponding to the hole pixels in that image are directly marked as hole pixels in the high dynamic range image.
508. In the second image, determine the first pixel corresponding to each hole pixel of the high dynamic range image marked with hole pixels.
Specifically, for each hole pixel in the high dynamic range image marked with hole pixels, the corresponding pixel in the second image is directly marked.
It should be noted that each hole pixel in the high dynamic range image marked with hole pixels has a corresponding first pixel in the second image.
509. Obtain the similarity coefficient between the adjacent pixels of each hole pixel in the high dynamic range image and the adjacent pixels of the first pixel, and according to the similarity coefficient and the first pixel, obtain the pixel value of each hole pixel of the at least one hole pixel in the high dynamic range image.
It should be noted that in this embodiment, the similarity relation between the adjacent pixels of a hole pixel in the high dynamic range image and the adjacent pixels of the corresponding first pixel in the second image is used as the similarity relation between the hole pixel and its first pixel; this similarity relation and the pixel value of the first pixel are then used to finally obtain the pixel value of the hole pixel.
It should be noted that the similarity relation can specifically be represented by a similarity coefficient.
Further, for obtaining the similarity coefficient between the adjacent pixels of any hole pixel r of the high dynamic range image and the adjacent pixels of the first pixel, the following three methods may be used.
First method:
According to a formula, obtain the similarity coefficient between the adjacent pixels of any hole pixel r of the high dynamic range image and the adjacent pixels of the first pixel.
Wherein, s represents a pixel in the neighbourhood Ψr of pixel r in the high dynamic range image; I(s) represents the pixel value of pixel s; I2(s) represents the pixel value of the pixel corresponding to pixel s in the second image; r-s represents the distance between pixel r and pixel s; γ is preset and represents the weight coefficient of the distance between pixel r and pixel s.
It should be noted that the neighbourhood Ψr of pixel r may or may not be a region centred on pixel r; the present invention does not limit the positional relationship between the neighbourhood Ψr and pixel r.
It should be noted that the formula x = arg min F(x) represents that the value of x is the value of x at which F(x) takes its minimum value.
It should be noted that in this case a similarity coefficient needs to be calculated once for each hole pixel in the high dynamic range image. That is to say, the similarity coefficients of the individual hole pixels in the high dynamic range image differ from one another.
Second method:
According to a formula, obtain the similarity coefficient between the adjacent pixels of any hole pixel r of the high dynamic range image and the adjacent pixels of the first pixel.
Wherein, the first proportionality coefficient ρ1 and the second proportionality coefficient ρ2 are preset values; s represents a pixel in the neighbourhood Φr of pixel r in the high dynamic range image; A represents the high dynamic range image; a′n represents the similarity coefficient obtained when calculating the pixel value of a hole pixel for the first time.
It should be noted that the pixel block Φr is a smaller region than the pixel block Ψr.
It should be noted that the neighbourhood Φr of pixel r may or may not be a region centred on pixel r; the present invention does not limit the positional relationship between the neighbourhood Φr and pixel r.
It should be noted that a′n is a value determined by combining the pixel values of each pixel in the high dynamic range image when calculating the first hole pixel; in order to simplify the calculation, the a′n determined when calculating the first hole pixel can be stored, and the value can be used directly when calculating the pixel values of subsequent hole pixels.
It should be noted that the above formula can be derived with the variable AN = [a0,a1,……,aN]; integrating the other parameters gives (C1+C2)×AN = (B1+B2), where C1 and B1 are related to the coefficients of the first half of the formula and are therefore related to pixel s, while C2 and B2 are related to the second half of the formula. However, the coefficients in the second half of the formula are not related to pixel s, so C2 and B2 are unrelated to p. When calculating different p, C2 and B2 are the same and need not be computed repeatedly; thus the a′n determined when calculating the first hole pixel can be reused in subsequent calculations and need not be recalculated.
It should be noted that the first proportionality coefficient ρ1 is greater than the second proportionality coefficient ρ2. For example, the value of the first proportionality coefficient ρ1 can be set to 1, and the value of the second proportionality coefficient ρ2 can be set to 0.001.
The third method:
First, determine whether hole pixel r has a first hole pixel.
Wherein, a first hole pixel is a hole pixel whose pixel value has already been obtained among the adjacent hole pixels of hole pixel r.
Secondly, if it is determined that there is a first hole pixel, take the similarity coefficient of the first hole pixel as the similarity coefficient of hole pixel r.
It should be noted that the third method uses the similarity coefficient of an adjacent hole pixel whose pixel value has already been calculated as the similarity coefficient of the current hole pixel, in order to simplify the step of calculating the similarity coefficient.
It should be noted that the first method or the second method can be combined with the third method to calculate the similarity coefficient of each hole pixel in the high dynamic range image.
Further, for any hole pixel r of the at least one hole pixel in the high dynamic range image, obtaining the pixel value of hole pixel r according to the similarity coefficient and the first pixel includes: obtaining the pixel value of hole pixel r according to a formula.
Wherein, I(r) represents the pixel value of hole pixel r; I2(r) represents the pixel value of the pixel corresponding to hole pixel r in the second image; an represents the similarity coefficient of hole pixel r; n = 0,1,……,N; N is a preset value.
It should be noted that in the embodiments of the invention, corresponding pixels in two images mean pixels having the same position in the two images.
It should be noted that the larger the value of N is set, the more accurate the calculated result, but the computational complexity increases correspondingly.
In the method of high dynamic range image synthesis provided by the embodiment of the present invention, the first image and the second image with different exposures are first obtained; binocular stereo matching is then performed on the first image and the second image to obtain a disparity map; a virtual view having the same view angle as the second image is synthesized according to the disparity map and the first image; the second gray level image is then obtained according to the second image and the virtual view gray level image according to the virtual view; the high dynamic range gray-scale map is obtained from the second gray level image and the virtual view gray level image through a high dynamic range synthesis algorithm; finally, the high dynamic range image is obtained according to the high dynamic range gray-scale map, the second gray level image, the virtual view gray level image, the second image and the virtual view. During the whole process of obtaining the high dynamic range image, the occlusion area and the noise pixels that strongly affect the picture are marked as hole pixels; finally, the relation between the adjacent pixels of each hole pixel and the adjacent pixels of the pixel corresponding to the hole pixel in the second image is used to estimate the relation between the hole pixel and the corresponding pixel in the second image, and the pixel value of the hole pixel is then obtained. In this way, because the relation between adjacent pixels is considered when synthesizing the virtual view, and the occlusion area and the noise pixels are further processed, the quality of the high dynamic range image is improved.
The embodiments of the present invention provide a method of synthesizing a disparity map. As shown in Fig. 7, the method includes:
701. Obtain a first image and a second image.
The first image and the second image are obtained by photographing the same object simultaneously.
For details, refer to step 101; they are not repeated here.
702. Obtain the candidate disparity value set of each pixel of the first image.
Each candidate disparity value set contains at least two candidate disparity values.
For details, refer to S1 in step 102; they are not repeated here.
703. Obtain the matching energy Ed(p,di) of each candidate disparity value in the candidate disparity value set of each pixel of the first image, according to each pixel of the first image, the pixel in the second image corresponding to that pixel, and the candidate disparity value set of that pixel.
Here, p denotes a pixel of the first image corresponding to a candidate disparity value set; di denotes the i-th candidate disparity value of pixel p, i = 1, ..., k; and k is the total number of candidate disparity values in the candidate disparity value set of pixel p.
It should be noted that, since the candidate disparity value set of each pixel contains at least two candidate disparity values, k ≥ 2.
Further, the present invention proposes two methods of computing the matching energy Ed(p,di), as follows:
First method: according to each candidate disparity value in the candidate disparity value set of pixel p, the matching energy Ed(p,di) of pixel p for each candidate disparity value di in the set is obtained using the formula.
Here, the values of the first fitting parameter a and the second fitting parameter b are those that minimize the matching energy Ed(p,di); w(p,q,di)=wc(p,q,di)ws(p,q,di)wd(p,q,di); the first pixel block Ωp denotes a pixel block in the first image containing pixel p; pixel q is a pixel adjacent to pixel p that belongs to the first pixel block Ωp; I1(q) denotes the pixel value of pixel q; I2(q-di) denotes the pixel value of pixel q-di in the second image corresponding to pixel q; wc(p,q,di) denotes the pixel weight; ws(p,q,di) denotes the distance weight; and wd(p,q,di) denotes the disparity weight.
Further, the pixel weight wc(p,q,di) can be obtained according to the formula
wc(p,q,di)=exp[-β1×|I1(p)-I1(q)|×|I2(p-di)-I2(q-di)|];
the distance weight ws(p,q,di) according to the formula
ws(p,q,di)=exp[-β2×(p-q)2];
and the disparity weight wd(p,q,di) according to the formula
wd(p,q,di)=exp[-β3×(p-q)2-β4×|I1(p)-I1(q)|×|I2(p-di)-I2(q-di)|].
Here, I1(p) denotes the pixel value of pixel p; I2(p-di) denotes the pixel value of pixel p-di in the second image corresponding to pixel p; and the first weight coefficient β1, the second weight coefficient β2, the third weight coefficient β3 and the fourth weight coefficient β4 are preset values.
For details, refer to the first method of S2 in step 102; they are not repeated here.
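The matching-energy formula itself survives only as an image in this text, but the pieces that remain — the three weight formulas and the statement that the fitting parameters a and b minimize Ed(p,di) — allow a minimal sketch. The weighted least-squares residual form below (fitting I1 over the block against the shifted I2) is an assumption, and the β values, block size and 1-D scanline layout are illustrative only:

```python
import numpy as np

def match_energy(I1, I2, p, d, half=2, b1=0.01, b2=0.1, b3=0.1, b4=0.01):
    """Sketch of Ed(p, d) on a 1-D scanline.

    The weight formulas follow the text; the fitted-residual energy
    (least-squares fit I1 ~ a*I2 + b over the block) is an assumption,
    since the energy formula is elided in the original.  Assumes p-d is
    a valid index into I2.
    """
    qs = np.arange(max(p - half, 0), min(p + half + 1, len(I1)))
    qs = qs[(qs - d >= 0) & (qs - d < len(I2))]   # keep q-d inside I2
    x = I2[qs - d].astype(float)                  # shifted second image
    y = I1[qs].astype(float)                      # first-image block
    # weight components, transcribed from the patent's formulas
    wc = np.exp(-b1 * np.abs(I1[p] - y) * np.abs(I2[p - d] - x))
    ws = np.exp(-b2 * (p - qs) ** 2)
    wd = np.exp(-b3 * (p - qs) ** 2 - b4 * np.abs(I1[p] - y) * np.abs(I2[p - d] - x))
    w = wc * ws * wd
    # fitting parameters a, b chosen to minimise the weighted residual
    A = np.vstack([x, np.ones_like(x)]).T * np.sqrt(w)[:, None]
    a, b = np.linalg.lstsq(A, y * np.sqrt(w), rcond=None)[0]
    return float(np.sum(w * (y - (a * x + b)) ** 2))
```

When the two exposures are related by an exact linear mapping at the true disparity, the fitted residual, and hence the energy, drops to (numerically) zero, which is the behavior a matching energy should have.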
Second method: according to each candidate disparity value in the candidate disparity value set of pixel p, the matching energy Ed(p,di) of pixel p for each candidate disparity value di in the set is obtained using the formula.
Here, w(p,q,di)=wc(p,q,di)ws(p,q,di)wd(p,q,di);
the pixel weight wc(p,q,di) can be obtained according to the formula
wc(p,q,di)=exp[-β1×|I′1(p)-I′1(q)|×|I′2(p-di)-I′2(q-di)|];
the distance weight ws(p,q,di) according to the formula
ws(p,q,di)=exp[-β2×(p-q)2];
and the disparity weight wd(p,q,di) according to the formula
wd(p,q,di)=exp[-β3×(p-q)2-β4×|I′1(p)-I′1(q)|×|I′2(p-di)-I′2(q-di)|];
where
I′1(p)=I1(p)cosθ-I2(p-di)sinθ;
I′2(p-di)=I1(p)sinθ-I2(p-di)cosθ;
I′1(q)=I1(q)cosθ-I2(q-di)sinθ;
I′2(q-di)=I1(q)sinθ-I2(q-di)cosθ;
and the adjustment angle θ is a preset value greater than 0° and less than 90°.
For details, refer to the second method of S2 in step 102; they are not repeated here.
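The only new ingredient in the second method is the intensity transform by the adjustment angle θ; the weights are then computed from the transformed values exactly as in the first method. A direct transcription of the transform (note that the minus sign appears in both rows as printed, so this is not a standard rotation matrix):

```python
import math

def transform_pair(i1, i2, theta_deg):
    """I'1, I'2 as written in the text for one (I1, I2) intensity pair.

    I'1 = I1*cos(theta) - I2*sin(theta)
    I'2 = I1*sin(theta) - I2*cos(theta)   (minus sign as printed)
    """
    t = math.radians(theta_deg)
    return i1 * math.cos(t) - i2 * math.sin(t), i1 * math.sin(t) - i2 * math.cos(t)
```

With θ = 45° and equal intensities, both transformed values vanish, so perfectly matched pixel pairs produce zero absolute differences inside the weight exponentials, i.e. maximal weight.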
704. Obtain the disparity value of each pixel in the first image according to the matching energy Ed(p,di) of each candidate disparity value in the candidate disparity value set of each pixel of the first image.
Further, according to the formula, the candidate disparity value di in the candidate disparity value set of pixel p that minimizes the candidate energy E(di) is determined as the disparity value of the corresponding pixel in the first image.
Here, I denotes the first image; the second pixel block Np denotes a pixel block in the first image containing pixel p; Vp,q(di,dj)=λ×min(|di-dj|,Vmax); dj denotes the j-th candidate disparity value of pixel q, j = 1, ..., m; m is the total number of candidate disparity values in the candidate disparity value set of pixel q; the smoothing factor λ is a preset value; and the maximum difference Vmax between the disparities of adjacent pixels is a preset value.
For details, refer to S3 in step 102; they are not repeated here.
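The pairwise term Vp,q(di,dj)=λ×min(|di-dj|,Vmax) is a truncated-linear smoothness cost. A minimal sketch of disparity selection under this cost can be given for a 1-D scanline, where restricting the neighborhood Np to the left neighbor makes exact dynamic programming possible; this restriction, and the parameter values, are simplifications of the 2-D formulation in the text:

```python
import numpy as np

def select_disparities(Ed, lam=1.0, vmax=2):
    """Pick one disparity index per pixel on a scanline by DP.

    Ed: (num_pixels, num_candidates) matrix of matching energies.
    Pairwise cost lam * min(|di - dj|, vmax) follows the text's V_{p,q};
    limiting it to the left neighbour is a simplification of N_p.
    """
    n, k = Ed.shape
    cost = Ed[0].copy()
    back = np.zeros((n, k), dtype=int)
    # truncated-linear pairwise cost between all (prev, cur) candidate pairs
    pair = lam * np.minimum(np.abs(np.arange(k)[None, :] - np.arange(k)[:, None]), vmax)
    for p in range(1, n):
        total = cost[:, None] + pair          # rows: prev candidate, cols: current
        back[p] = np.argmin(total, axis=0)
        cost = total.min(axis=0) + Ed[p]
    d = np.zeros(n, dtype=int)
    d[-1] = int(np.argmin(cost))
    for p in range(n - 1, 0, -1):             # backtrack the optimal labelling
        d[p - 1] = back[p, d[p]]
    return d
```

When one candidate dominates the data term everywhere, the smoothness term adds nothing and the DP simply returns that candidate for every pixel.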
705. Combine the disparity values of the pixels in the first image to obtain the disparity map.
For details, refer to S4 in step 102; they are not repeated here.
The embodiments of the present invention provide a method of synthesizing a disparity map: obtain a first image and a second image; obtain the candidate disparity value set of each pixel of the first image; obtain the matching energy Ed(p,di) of each candidate disparity value in each pixel's candidate disparity value set, according to each pixel of the first image, the corresponding pixel in the second image and the candidate disparity value set; obtain the disparity value of each pixel in the first image from these matching energies; and finally combine the disparity values of the pixels in the first image into the disparity map. Because of the way the disparity value of each pixel is computed, the error in the finally obtained disparity values is greatly reduced, improving the quality of the high dynamic range image.
Fig. 8 is a functional schematic diagram of a high dynamic range image synthesis apparatus provided by an embodiment of the present invention. As shown in Fig. 8, the apparatus includes: an acquiring unit 801, a disparity processing unit 802, a virtual view synthesis unit 803, a grayscale extraction unit 804, a high dynamic range fusion unit 805 and a color interpolation unit 806.
The acquiring unit 801 is configured to obtain a first image and a second image.
The first image and the second image are obtained by photographing the same object simultaneously with different exposures.
The disparity processing unit 802 is configured to perform binocular stereo matching on the first image and the second image obtained by the acquiring unit 801, to obtain a disparity map.
Further, as shown in Fig. 9, the disparity processing unit 802 includes: an acquisition module 8021, a computing module 8022, a determining module 8023 and a composition module 8024.
The acquisition module 8021 is configured to obtain the candidate disparity value set of each pixel of the first image.
Each candidate disparity value set contains at least two candidate disparity values.
The computing module 8022 is configured to obtain the matching energy Ed(p,di) of each candidate disparity value in the candidate disparity value set of each pixel of the first image, according to each pixel of the first image, the pixel in the second image corresponding to that pixel, and the candidate disparity value set of that pixel.
Here, p denotes a pixel of the first image corresponding to a candidate disparity value set; di denotes the i-th candidate disparity value of pixel p, i = 1, ..., k; and k is the total number of candidate disparity values in the candidate disparity value set of pixel p.
Specifically, the computing module 8022 can obtain the matching energy Ed(p,di) of each candidate disparity value in the candidate disparity value set of each pixel of the first image by either of the following two methods:
First method: the computing module 8022 is specifically configured to obtain the matching energy Ed(p,di) of pixel p for each candidate disparity value di in the set, according to each candidate disparity value in the candidate disparity value set of pixel p, using the formula.
Here, the values of the first fitting parameter a and the second fitting parameter b are those that minimize the matching energy Ed(p,di); w(p,q,di)=wc(p,q,di)ws(p,q,di)wd(p,q,di); the first pixel block Ωp denotes a pixel block in the first image containing pixel p; pixel q is a pixel adjacent to pixel p that belongs to the first pixel block Ωp; I1(q) denotes the pixel value of pixel q; I2(q-di) denotes the pixel value of pixel q-di in the second image corresponding to pixel q; wc(p,q,di) denotes the pixel weight; ws(p,q,di) denotes the distance weight; and wd(p,q,di) denotes the disparity weight.
Further, the pixel weight wc(p,q,di) can be obtained according to the formula
wc(p,q,di)=exp[-β1×|I1(p)-I1(q)|×|I2(p-di)-I2(q-di)|];
the distance weight ws(p,q,di) according to the formula
ws(p,q,di)=exp[-β2×(p-q)2];
and the disparity weight wd(p,q,di) according to the formula
wd(p,q,di)=exp[-β3×(p-q)2-β4×|I1(p)-I1(q)|×|I2(p-di)-I2(q-di)|].
Here, I1(p) denotes the pixel value of pixel p; I2(p-di) denotes the pixel value of pixel p-di in the second image corresponding to pixel p; and the first weight coefficient β1, the second weight coefficient β2, the third weight coefficient β3 and the fourth weight coefficient β4 are preset values.
Second method: the computing module 8022 is specifically configured to obtain the matching energy Ed(p,di) of pixel p for each candidate disparity value di in the set, according to each candidate disparity value in the candidate disparity value set of pixel p, using the formula.
Here, w(p,q,di)=wc(p,q,di)ws(p,q,di)wd(p,q,di);
the pixel weight wc(p,q,di) can be obtained according to the formula
wc(p,q,di)=exp[-β1×|I′1(p)-I′1(q)|×|I′2(p-di)-I′2(q-di)|];
the distance weight ws(p,q,di) according to the formula
ws(p,q,di)=exp[-β2×(p-q)2];
and the disparity weight wd(p,q,di) according to the formula
wd(p,q,di)=exp[-β3×(p-q)2-β4×|I′1(p)-I′1(q)|×|I′2(p-di)-I′2(q-di)|];
where
I′1(p)=I1(p)cosθ-I2(p-di)sinθ;
I′2(p-di)=I1(p)sinθ-I2(p-di)cosθ;
I′1(q)=I1(q)cosθ-I2(q-di)sinθ;
I′2(q-di)=I1(q)sinθ-I2(q-di)cosθ;
and the adjustment angle θ is a preset value greater than 0° and less than 90°.
The determining module 8023 is configured to obtain the disparity value of each pixel in the first image according to the matching energy Ed(p,di) of each candidate disparity value in the candidate disparity value set of each pixel of the first image.
Specifically, the determining module 8023 is configured to determine, according to the formula, the candidate disparity value di in the candidate disparity value set of pixel p that minimizes the candidate energy E(di) as the disparity value of the corresponding pixel in the first image.
Here, I denotes the first image; the second pixel block Np denotes a pixel block in the first image containing pixel p; Vp,q(di,dj)=λ×min(|di-dj|,Vmax); dj denotes the j-th candidate disparity value of pixel q, j = 1, ..., m; m is the total number of candidate disparity values in the candidate disparity value set of pixel q; the smoothing factor λ is a preset value; and the maximum difference Vmax between the disparities of adjacent pixels is a preset value.
The composition module 8024 is configured to combine the disparity values of the pixels in the first image to obtain the disparity map.
The virtual view synthesis unit 803 is configured to synthesize a virtual view with the same viewing angle as the second image, from the disparity map obtained by the disparity processing unit 802 and the first image obtained by the acquiring unit 801.
The grayscale extraction unit 804 is configured to obtain a second grayscale image from the second image obtained by the acquiring unit 801, and a virtual-view grayscale image from the virtual view synthesized by the virtual view synthesis unit 803.
Further, the grayscale extraction unit 804 is specifically configured to obtain, from the virtual view with marked hole pixels, a virtual-view grayscale image with marked hole pixels.
The high dynamic range fusion unit 805 is configured to obtain a high dynamic range grayscale map from the second grayscale image and the virtual-view grayscale image obtained by the grayscale extraction unit 804, through a high dynamic range composition algorithm.
Further, the high dynamic range fusion unit 805 is specifically configured to obtain a high dynamic range grayscale map with marked hole pixels, from the second grayscale image and the virtual-view grayscale image with marked hole pixels, through a high dynamic range composition algorithm.
The color interpolation unit 806 is configured to obtain a high dynamic range image from the high dynamic range grayscale map, the second grayscale image, the virtual-view grayscale image, the second image and the virtual view.
Further, the color interpolation unit 806 is specifically configured to successively obtain, using the formulas, the red component value Ired(e), the green component value Igreen(e) and the blue component value Iblue(e) of each pixel in the high dynamic range image.
Here, e denotes a pixel e in the high dynamic range image; Igrey(e) denotes the pixel value of the pixel corresponding to pixel e in the high dynamic range grayscale map; the remaining terms denote, respectively, the pixel value of the pixel corresponding to pixel e in the second grayscale image, the pixel value of the pixel corresponding to pixel e in the virtual-view grayscale image, the red, green and blue component values of the pixel corresponding to pixel e in the second image, and the red, green and blue component values of the pixel corresponding to pixel e in the virtual view.
The color interpolation unit 806 is specifically configured to obtain the pixel value of each pixel in the high dynamic range image from its red, green and blue component values, and to combine the pixel values of the pixels in the high dynamic range image to obtain the high dynamic range image.
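The interpolation formulas themselves are elided in this text; only their inputs (the HDR grayscale value, the two source grayscale values and the two sets of RGB components) survive. A common ratio-based chroma reconstruction consistent with those inputs is sketched below; the averaging of the two colour-to-grey ratios, and the epsilon guard, are assumptions, not the patent's formulas:

```python
import numpy as np

def hdr_color(hdr_grey, grey2, greyv, rgb2, rgbv, eps=1e-6):
    """Restore RGB for one HDR pixel from its two sources (hedged sketch).

    Each source's colour-to-grey ratio is averaged, then scaled by the
    HDR grey value.  This specific combination rule is an assumption.
    """
    rgb2 = np.asarray(rgb2, dtype=float)
    rgbv = np.asarray(rgbv, dtype=float)
    ratio = 0.5 * (rgb2 / (grey2 + eps) + rgbv / (greyv + eps))
    return hdr_grey * ratio
```

The design intent is that the luminance comes from the fused HDR grayscale map while the chroma proportions come from whichever exposures actually observed the pixel.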
Further, the color interpolation unit 806 is specifically configured to obtain a high dynamic range image with marked hole pixels, from the high dynamic range grayscale map with marked hole pixels, the second grayscale image, the virtual-view grayscale image with marked hole pixels, the second image and the virtual view with marked hole pixels.
Further, as shown in Fig. 10, the high dynamic range image synthesis apparatus also includes: a hole pixel processing unit 807.
The hole pixel processing unit 807 is configured to mark the noise pixels in the virtual view, or the occlusion regions, as hole pixels.
Here, an occlusion region is a region produced because the first image and the second image photograph the same object from different angles; a noise pixel is produced by a pixel in the disparity map whose disparity value was computed incorrectly.
Specifically, the hole pixel processing unit 807 is configured to determine at least two second pixels in the second image.
Here, the second pixels are pixels with identical pixel values.
The hole pixel processing unit 807 is specifically configured to obtain, from the at least two second pixels in the second image, at least two marked pixels in the virtual view.
Here, the at least two marked pixels in the virtual view are the pixels in the virtual view corresponding respectively to the at least two second pixels in the second image.
The hole pixel processing unit 807 is specifically configured to obtain the average pixel value of the at least two marked pixels in the virtual view, and to determine, in turn, whether the difference between the pixel value of each of the at least two marked pixels and the average pixel value exceeds a noise threshold.
Here, the noise threshold is a preset value used to judge noise.
The hole pixel processing unit 807 is specifically configured to, where the difference between a marked pixel's value and the average pixel value exceeds the noise threshold, determine that marked pixel to be a noise pixel and mark the noise pixel as a hole pixel.
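The detection procedure just described — group equal-valued pixels in the second image, look at their counterparts in the virtual view, and flag counterparts that deviate too far from the group mean — can be sketched as follows. The hole marker value -1 and the identity correspondence between the two images' coordinates are illustrative assumptions:

```python
import numpy as np

def mark_noise_holes(second, virtual, noise_threshold, hole=-1.0):
    """Mark noise pixels in the virtual view as holes (sketch of unit 807).

    For each set of equal-valued "second pixels" in the second image,
    the corresponding virtual-view pixels should be similar; any that
    deviate from their mean by more than the threshold are flagged.
    """
    out = virtual.astype(float).copy()
    for v in np.unique(second):
        mask = (second == v)
        if mask.sum() < 2:
            continue                      # need at least two second pixels
        vals = virtual[mask].astype(float)
        bad = np.abs(vals - vals.mean()) > noise_threshold
        idx = np.flatnonzero(mask)        # same row-major order as vals
        out.flat[idx[bad]] = hole
    return out
```

In a real pipeline the correspondence between the second image and the virtual view would come from the disparity map rather than identical coordinates.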
The hole pixel processing unit 807 is further configured to determine, in the second image, the first pixel corresponding to each hole pixel in the high dynamic range image with marked hole pixels.
The hole pixel processing unit 807 is further configured to obtain the similarity coefficients between the neighbors of each hole pixel in the high dynamic range image and the neighbors of its first pixel, and, according to the similarity coefficients and the first pixel, to obtain the pixel value of each hole pixel of the at least one hole pixel in the high dynamic range image.
Specifically, the hole pixel processing unit 807 can obtain the similarity coefficients between the neighbors of each hole pixel in the high dynamic range image and the neighbors of its first pixel by any of the following three methods:
First method: the hole pixel processing unit 807 is specifically configured to obtain, according to the formula, the similarity coefficients between the neighbors of any hole pixel r in the high dynamic range image and the neighbors of its first pixel.
Here, s denotes a pixel in the neighborhood Ψr of pixel r in the high dynamic range image; I(s) denotes the pixel value of pixel s; I2(s) denotes the pixel value of the pixel corresponding to pixel s in the second image; r-s denotes the distance between pixel r and pixel s; and γ is a preset weight coefficient for the distance between pixel r and pixel s.
Second method: the hole pixel processing unit 807 is specifically configured to obtain, according to the formula
[a0,a1,……,aN]=
the similarity coefficients between the neighbors of any hole pixel r in the high dynamic range image and the neighbors of its first pixel.
Here, the first proportionality coefficient ρ1 and the second proportionality coefficient ρ2 are preset values; s denotes a pixel in the neighborhood Φr of pixel r in the high dynamic range image; A denotes the high dynamic range image; and a′n denotes the similarity coefficient obtained the first time the pixel values of the hole pixels were computed.
Third method: the hole pixel processing unit 807 is specifically configured to determine whether hole pixel r has a first hole pixel, and, where a first hole pixel exists, to use the similarity coefficients of the first hole pixel as the similarity coefficients of hole pixel r.
Here, the first hole pixel is a hole pixel among the adjacent hole pixels of hole pixel r whose pixel value has already been obtained.
Specifically, the hole pixel processing unit 807 is configured to obtain the pixel value of hole pixel r according to the formula.
Here, I(r) denotes the pixel value of hole pixel r; I2(r) denotes the pixel value of the pixel corresponding to hole pixel r in the second image; an denotes the similarity coefficients of hole pixel r; n = 0, 1, ..., N; and N is a preset value.
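The hole-filling formulas are elided, but the surviving variables (I(r), I2(r), coefficients a0...aN, neighborhood pixels s, distance weight γ) suggest a polynomial mapping of order N from I2 to I, fitted over the neighborhood with distance-decaying weights. Both the polynomial form and the exp(-γ·distance²) weight below are assumptions consistent with those variables, not the patent's exact formulas:

```python
import numpy as np

def fill_hole(r, neighbors, I, I2, N=2, gamma=0.1):
    """Estimate hole pixel r's value on a 1-D scanline (hedged sketch).

    Assumed model: I(s) ~ sum_n a_n * I2(s)**n over neighbourhood pixels s,
    fitted by least squares with weights exp(-gamma * |r - s|**2), then
    the fitted coefficients a_n are applied at I2(r).
    """
    s = np.asarray(neighbors)
    w = np.exp(-gamma * (r - s) ** 2)             # distance weights (assumed form)
    X = np.vander(I2[s].astype(float), N + 1, increasing=True)  # [1, I2, I2^2, ...]
    sw = np.sqrt(w)
    a, *_ = np.linalg.lstsq(X * sw[:, None], I[s].astype(float) * sw, rcond=None)
    powers = np.array([I2[r] ** n for n in range(N + 1)], dtype=float)
    return float(a @ powers)
```

This matches the text's remark that larger N gives a more accurate fit at higher computational cost: N+1 coefficients must be estimated per hole pixel.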
In the high dynamic range image synthesis apparatus provided by the embodiments of the present invention, a first image and a second image with different exposures are first obtained; binocular stereo matching is performed on the first image and the second image to obtain a disparity map; a virtual view with the same viewing angle as the second image is synthesized from the disparity map and the first image; a second grayscale image is obtained from the second image and a virtual-view grayscale image from the virtual view; a high dynamic range grayscale map is obtained from the second grayscale image and the virtual-view grayscale image through a high dynamic range composition algorithm; and finally a high dynamic range image is obtained from the high dynamic range grayscale map, the second grayscale image, the virtual-view grayscale image, the second image and the virtual view. Throughout this process, occlusion regions and pixels strongly affected by noise are marked as hole pixels, and the relation between a hole pixel and its corresponding pixel in the second image is estimated from the relation between the neighbors of the hole pixel and the neighbors of that corresponding pixel, yielding the pixel value of the hole pixel. Because the relations between adjacent pixels are taken into account during virtual view synthesis, and occlusion regions and noise pixels receive further processing, the quality of the high dynamic range image is improved.
Fig. 11 is a functional schematic diagram of a device provided by an embodiment of the present invention. As shown in Fig. 11, the device includes: an acquiring unit 1101, a computing unit 1102, a determining unit 1103 and a processing unit 1104.
The acquiring unit 1101 is configured to obtain a first image and a second image.
The first image and the second image are obtained by photographing the same object simultaneously.
The acquiring unit 1101 is further configured to obtain the candidate disparity value set of each pixel of the first image.
Each candidate disparity value set contains at least two candidate disparity values.
The computing unit 1102 is configured to obtain the matching energy Ed(p,di) of each candidate disparity value in the candidate disparity value set of each pixel of the first image, according to each pixel of the first image, the pixel in the second image corresponding to that pixel, and the candidate disparity value set of that pixel.
Here, p denotes a pixel of the first image corresponding to a candidate disparity value set; di denotes the i-th candidate disparity value of pixel p, i = 1, ..., k; and k is the total number of candidate disparity values in the candidate disparity value set of pixel p.
Further, the computing unit 1102 can obtain the matching energy Ed(p,di) of each candidate disparity value in the candidate disparity value set of each pixel of the first image by either of the following two methods:
First method: the computing unit 1102 is specifically configured to obtain the matching energy Ed(p,di) of pixel p for each candidate disparity value di in the set, according to each candidate disparity value in the candidate disparity value set of pixel p, using the formula.
Here, the values of the first fitting parameter a and the second fitting parameter b are those that minimize the matching energy Ed(p,di); w(p,q,di)=wc(p,q,di)ws(p,q,di)wd(p,q,di); the first pixel block Ωp denotes a pixel block in the first image containing pixel p; pixel q is a pixel adjacent to pixel p that belongs to the first pixel block Ωp; I1(q) denotes the pixel value of pixel q; I2(q-di) denotes the pixel value of pixel q-di in the second image corresponding to pixel q; wc(p,q,di) denotes the pixel weight; ws(p,q,di) denotes the distance weight; and wd(p,q,di) denotes the disparity weight.
Further, the pixel weight wc(p,q,di) can be obtained according to the formula
wc(p,q,di)=exp[-β1×|I1(p)-I1(q)|×|I2(p-di)-I2(q-di)|];
the distance weight ws(p,q,di) according to the formula
ws(p,q,di)=exp[-β2×(p-q)2];
and the disparity weight wd(p,q,di) according to the formula
wd(p,q,di)=exp[-β3×(p-q)2-β4×|I1(p)-I1(q)|×|I2(p-di)-I2(q-di)|].
Here, I1(p) denotes the pixel value of pixel p; I2(p-di) denotes the pixel value of pixel p-di in the second image corresponding to pixel p; and the first weight coefficient β1, the second weight coefficient β2, the third weight coefficient β3 and the fourth weight coefficient β4 are preset values.
Second method: the computing unit 1102 is specifically configured to obtain the matching energy Ed(p,di) of pixel p for each candidate disparity value di in the set, according to each candidate disparity value in the candidate disparity value set of pixel p, using the formula.
Here, w(p,q,di)=wc(p,q,di)ws(p,q,di)wd(p,q,di);
the pixel weight wc(p,q,di) can be obtained according to the formula
wc(p,q,di)=exp[-β1×|I′1(p)-I′1(q)|×|I′2(p-di)-I′2(q-di)|];
the distance weight ws(p,q,di) according to the formula
ws(p,q,di)=exp[-β2×(p-q)2];
and the disparity weight wd(p,q,di) according to the formula
wd(p,q,di)=exp[-β3×(p-q)2-β4×|I′1(p)-I′1(q)|×|I′2(p-di)-I′2(q-di)|];
where
I′1(p)=I1(p)cosθ-I2(p-di)sinθ;
I′2(p-di)=I1(p)sinθ-I2(p-di)cosθ;
I′1(q)=I1(q)cosθ-I2(q-di)sinθ;
I′2(q-di)=I1(q)sinθ-I2(q-di)cosθ;
and the adjustment angle θ is a preset value greater than 0° and less than 90°.
The determining unit 1103 is configured to obtain the disparity value of each pixel in the first image according to the matching energy Ed(p,di) of each candidate disparity value in the candidate disparity value set of each pixel of the first image.
Further, the determining unit 1103 is specifically configured to determine, according to the formula, the candidate disparity value di in the candidate disparity value set of pixel p that minimizes the candidate energy E(di) as the disparity value of the corresponding pixel in the first image.
Here, I denotes the first image; the second pixel block Np denotes a pixel block in the first image containing pixel p; Vp,q(di,dj)=λ×min(|di-dj|,Vmax); dj denotes the j-th candidate disparity value of pixel q, j = 1, ..., m; m is the total number of candidate disparity values in the candidate disparity value set of pixel q; the smoothing factor λ is a preset value; and the maximum difference Vmax between the disparities of adjacent pixels is a preset value.
The processing unit 1104 is configured to combine the disparity values of the pixels in the first image to obtain the disparity map.
The embodiments of the present invention provide a device that obtains a first image and a second image, obtains the candidate disparity value set of each pixel of the first image, obtains the matching energy Ed(p,di) of each candidate disparity value in each pixel's candidate disparity value set according to each pixel of the first image, the corresponding pixel in the second image and the candidate disparity value set, obtains the disparity value of each pixel in the first image from these matching energies, and finally combines the disparity values of the pixels in the first image into the disparity map. Because of the way the disparity value of each pixel is computed, the error in the finally obtained disparity values is greatly reduced, improving the quality of the high dynamic range image.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely schematic: the division into units is only a division by logical function, and other divisions are possible in an actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. Moreover, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections of apparatuses or units through some interfaces, and may be electrical, mechanical or of other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform some of the steps of the methods described in the embodiments of the present invention. The storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
Finally, it should be noted that the above embodiments merely illustrate the technical solutions of the present invention and do not limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (34)
1. a kind of method of high dynamic range images synthesis, it is characterised in that including:
Obtain the first image and the second image;Described first image is to same thing using different exposures from second image
Body shoots what is obtained simultaneously;
Binocular solid is carried out to described first image and second image to match, and obtains disparity map;
According to the disparity map and described first image, synthesis has the virtual view of same view angle with second image;
Second gray level image is obtained according to second image, and virtual view gray level image is obtained according to the virtual view;
According to second gray level image and the virtual view gray level image, by HDR composition algorithm, height is obtained
Dynamic range gray-scale map;
According to the HDR gray-scale map, second gray level image, the virtual view gray level image, second figure
Picture and the virtual view, obtain high dynamic range images.
2. The method according to claim 1, characterised in that the method further comprises:
when synthesising, according to the disparity map and the first image, the virtual view having the same viewing angle as the second image, marking the pixels of an occlusion region in the virtual view as hole pixels; the occlusion region being a region produced by the first image and the second image photographing the same object from different angles; or,
after synthesising, according to the disparity map and the first image, the virtual view having the same viewing angle as the second image, and before obtaining the second grayscale image according to the second image and the virtual view grayscale image according to the virtual view, the method further comprises: marking noise pixels in the virtual view or the occlusion region as hole pixels; the noise pixels being produced by pixels whose disparity values in the disparity map were calculated incorrectly;
the obtaining a virtual view grayscale image according to the virtual view comprises:
obtaining a virtual view grayscale image with marked hole pixels according to the virtual view with marked hole pixels;
the obtaining a high dynamic range grayscale map according to the second grayscale image and the virtual view grayscale image, by a high dynamic range synthesis algorithm, comprises:
obtaining a high dynamic range grayscale map with marked hole pixels according to the second grayscale image and the virtual view grayscale image with marked hole pixels, by the high dynamic range synthesis algorithm;
the obtaining a high dynamic range image according to the high dynamic range grayscale map, the second grayscale image, the virtual view grayscale image, the second image and the virtual view comprises:
obtaining a high dynamic range image with marked hole pixels according to the high dynamic range grayscale map with marked hole pixels, the second grayscale image, the virtual view grayscale image with marked hole pixels, the second image and the virtual view with marked hole pixels;
after obtaining the high dynamic range image according to the high dynamic range grayscale map, the second image and the virtual view grayscale image, the method further comprises:
determining, in the second image, a first pixel corresponding to each hole pixel in the high dynamic range image with marked hole pixels;
obtaining a similarity coefficient between the adjacent pixels of each hole pixel in the high dynamic range image and the adjacent pixels of the first pixel; and obtaining the pixel value of each hole pixel in the high dynamic range image according to the similarity coefficient and the first pixel.
3. The method according to claim 2, characterised in that
the performing binocular stereo matching on the first image and the second image to obtain a disparity map comprises:
obtaining a candidate disparity value set for each pixel of the first image, the candidate disparity value set containing at least two candidate disparity values;
obtaining, according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image, the matching energy E_d(p, d_i) of each candidate disparity value in the candidate disparity value set of each pixel of the first image; wherein p denotes the pixel p of the first image corresponding to the candidate disparity value set; d_i denotes the i-th candidate disparity value of the pixel p, i = 1, ..., k; and k is the total number of candidate disparity values in the candidate disparity value set of the pixel p;
obtaining the disparity value of each pixel in the first image according to the matching energy E_d(p, d_i) of each candidate disparity value in the candidate disparity value set of each pixel of the first image;
combining the disparity values of the pixels in the first image to obtain the disparity map.
4. The method according to claim 3, characterised in that
the obtaining, according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image, the matching energy E_d(p, d_i) of each candidate disparity value in the candidate disparity value set of each pixel of the first image comprises:
calculating, by formula, according to each candidate disparity value in the candidate disparity value set of the pixel p, the matching energy E_d(p, d_i) of the pixel p for each candidate disparity value d_i in the candidate disparity value set; wherein the value of a first fitting parameter a and the value of a second fitting parameter b are the values that make the matching energy E_d(p, d_i) a minimum; w(p, q, d_i) = w_c(p, q, d_i)·w_s(p, q, d_i)·w_d(p, q, d_i); the first pixel block Ω_p denotes a pixel block in the first image that contains the pixel p; the pixel q is a pixel adjacent to the pixel p and belonging to the first pixel block Ω_p; I_1(q) denotes the pixel value of the pixel q; I_2(q - d_i) denotes the pixel value of the pixel q - d_i in the second image corresponding to the pixel q; w_c(p, q, d_i) denotes a pixel weight value; w_s(p, q, d_i) denotes a distance weight value; and w_d(p, q, d_i) denotes a disparity weight value.
5. The method according to claim 4, characterised in that
the pixel weight value w_c(p, q, d_i) can be obtained according to the formula
w_c(p, q, d_i) = exp[-β_1 × |I_1(p) - I_1(q)| × |I_2(p - d_i) - I_2(q - d_i)|];
the distance weight value w_s(p, q, d_i) can be obtained according to the formula
w_s(p, q, d_i) = exp[-β_2 × (p - q)^2];
the disparity weight value w_d(p, q, d_i) can be obtained according to the formula
w_d(p, q, d_i) = exp[-β_3 × (p - q)^2 - β_4 × |I_1(p) - I_1(q)| × |I_2(p - d_i) - I_2(q - d_i)|];
wherein I_1(p) denotes the pixel value of the pixel p; I_2(p - d_i) denotes the pixel value of the pixel p - d_i in the second image corresponding to the pixel p; and the first weight coefficient β_1, the second weight coefficient β_2, the third weight coefficient β_3 and the fourth weight coefficient β_4 are preset values.
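The three weights of claims 4 and 5 multiply into the support weight w(p, q, d_i) of claim 4. A minimal sketch, treating the images as 1-D grayscale arrays so that p, q and p - d_i are plain indices; the helper name and the scalar β arguments are illustrative:

```python
import numpy as np

def support_weight(I1, I2, p, q, d, b1, b2, b3, b4):
    """Compute w(p, q, d_i) = w_c * w_s * w_d for one (p, q, d_i) triple,
    per the formulas of claim 5.  I1, I2: 1-D grayscale rows; b1..b4 are
    the preset weight coefficients beta_1..beta_4."""
    # Pixel weight: joint intensity difference in both images
    wc = np.exp(-b1 * abs(I1[p] - I1[q]) * abs(I2[p - d] - I2[q - d]))
    # Distance weight: spatial distance between p and q
    ws = np.exp(-b2 * (p - q) ** 2)
    # Disparity weight: combines both penalties
    wd = np.exp(-b3 * (p - q) ** 2
                - b4 * abs(I1[p] - I1[q]) * abs(I2[p - d] - I2[q - d]))
    return wc * ws * wd
```

Since every exponent is non-positive for positive β coefficients, each factor lies in (0, 1], and the weight is exactly 1 when q = p.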
6. The method according to claim 3, characterised in that
the obtaining, according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image, the matching energy E_d(p, d_i) of each candidate disparity value in the candidate disparity value set of each pixel of the first image comprises:
calculating, by formula, according to each candidate disparity value in the candidate disparity value set of the pixel p, the matching energy E_d(p, d_i) of the pixel p for each candidate disparity value d_i in the candidate disparity value set;
wherein w(p, q, d_i) = w_c(p, q, d_i)·w_s(p, q, d_i)·w_d(p, q, d_i);
the pixel weight value w_c(p, q, d_i) can be obtained according to the formula
w_c(p, q, d_i) = exp[-β_1 × |I′_1(p) - I′_1(q)| × |I′_2(p - d_i) - I′_2(q - d_i)|];
the distance weight value w_s(p, q, d_i) can be obtained according to the formula
w_s(p, q, d_i) = exp[-β_2 × (p - q)^2];
the disparity weight value w_d(p, q, d_i) can be obtained according to the formula
w_d(p, q, d_i) = exp[-β_3 × (p - q)^2 - β_4 × |I′_1(p) - I′_1(q)| × |I′_2(p - d_i) - I′_2(q - d_i)|];
I′_1(p) = I_1(p)cosθ - I_2(p - d_i)sinθ;
I′_2(p - d_i) = I_1(p)sinθ - I_2(p - d_i)cosθ;
I′_1(q) = I_1(q)cosθ - I_2(q - d_i)sinθ;
I′_2(q - d_i) = I_1(q)sinθ - I_2(q - d_i)cosθ;
the adjustment angle θ is a preset value greater than 0° and less than 90°;
the first pixel block Ω_p denotes a pixel block in the first image that contains the pixel p; the value of the first fitting parameter a and the value of the second fitting parameter b are the values that make the matching energy E_d(p, d_i) a minimum; I_1(p) denotes the pixel value of the pixel p; I_1(q) denotes the pixel value of the pixel q; I_2(p - d_i) denotes the pixel value of the pixel p - d_i in the second image corresponding to the pixel p; and the first weight coefficient β_1, the second weight coefficient β_2, the third weight coefficient β_3 and the fourth weight coefficient β_4 are preset values.
7. The method according to any one of claims 3-6, characterised in that
the obtaining the disparity value of each pixel in the first image according to the matching energy E_d(p, d_i) of each candidate disparity value in the candidate disparity value set of each pixel of the first image comprises:
determining, by formula, the candidate disparity value d_i in the candidate disparity value set of the pixel p that makes the candidate energy E(d_i) a minimum, and taking that candidate disparity value of each pixel as the disparity value of the corresponding pixel in the first image; wherein I denotes the first image; the second pixel block N_p denotes a pixel block in the first image that contains the pixel p; V_p,q(d_i, d_j) = λ × min(|d_i - d_j|, V_max); d_j denotes the j-th candidate disparity value of the pixel q, j = 1, ..., m; m is the total number of candidate disparity values in the candidate disparity value set of the pixel q; the smoothing coefficient λ is a preset value; and the maximum difference V_max between the disparities of adjacent pixels is a preset value.
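Claim 7 selects, per pixel, the candidate disparity minimising a candidate energy that adds the truncated-linear smoothness term V_p,q(d_i, d_j) = λ × min(|d_i - d_j|, V_max) to the matching energy. The optimisation formula itself does not survive in this text, so the sketch below assumes a simple iterated-conditional-modes pass over 4-neighbourhoods; the function name and loop scheme are illustrative, not the patent's optimiser:

```python
import numpy as np

def select_disparities(Ed, lam=1.0, vmax=2.0, iters=2):
    """Ed: array of shape (H, W, K) -- matching energy E_d(p, d_i) for
    each of K candidate disparity indices at each pixel.  Each pixel
    picks the candidate minimising its matching energy plus the
    truncated-linear penalty against its neighbours' current choices."""
    H, W, K = Ed.shape
    d = Ed.argmin(axis=2).astype(float)      # initialise with data term only
    cand = np.arange(K, dtype=float)
    for _ in range(iters):
        for y in range(H):
            for x in range(W):
                cost = Ed[y, x].copy()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < H and 0 <= nx < W:
                        # V(d_i, d_j) = lam * min(|d_i - d_j|, vmax)
                        cost += lam * np.minimum(np.abs(cand - d[ny, nx]), vmax)
                d[y, x] = cost.argmin()
    return d.astype(int)
```

When one candidate dominates everywhere, the smoothness term changes nothing and the data-term winner is returned unchanged.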
8. The method according to any one of claims 1-6, characterised in that
the obtaining a high dynamic range image according to the high dynamic range grayscale map, the second grayscale image, the virtual view grayscale image, the second image and the virtual view comprises:
obtaining in turn, by formula, the red component value I_red(e), the green component value I_green(e) and the blue component value I_blue(e) of each pixel in the high dynamic range image; wherein e denotes the pixel e in the high dynamic range image; I_grey(e) denotes the pixel value of the pixel corresponding to the pixel e in the high dynamic range grayscale map; the formulas further use the pixel value of the pixel corresponding to the pixel e in the second grayscale image and the pixel value of the pixel corresponding to the pixel e in the virtual view grayscale image, as well as the red, green and blue component values of the pixel corresponding to the pixel e in the second image and the red, green and blue component values of the pixel corresponding to the pixel e in the virtual view;
obtaining the pixel value of each pixel in the high dynamic range image according to the red component value, the green component value and the blue component value of each pixel in the high dynamic range image;
combining the pixel values of the pixels in the high dynamic range image to obtain the high dynamic range image.
9. The method according to any one of claims 2-6, characterised in that
the marking the noise pixels in the virtual view as hole pixels comprises:
determining at least two second pixels in the second image, the second pixels being pixels having the same pixel value;
obtaining, according to the at least two second pixels in the second image, at least two marked pixels in the virtual view, the at least two marked pixels being the pixels in the virtual view respectively corresponding to the at least two second pixels in the second image;
obtaining the average pixel value of the at least two marked pixels in the virtual view;
determining in turn whether the difference between the pixel value of each of the at least two marked pixels in the virtual view and the average pixel value is greater than a noise threshold;
if the difference between the pixel value of a marked pixel and the average pixel value is greater than the noise threshold, determining the marked pixel to be a noise pixel and marking the noise pixel as a hole pixel.
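The noise test of claim 9 can be sketched as follows, assuming 2-D grayscale arrays and a boolean hole mask; the function name and the in-place mask update are illustrative choices, not claim language:

```python
import numpy as np

def mark_noise_as_holes(second_img, virtual_view, hole_mask, noise_threshold):
    """Claim 9 sketch: pixels sharing one value in the second image (the
    "second pixels") should map to consistent values in the virtual view;
    marked pixels deviating from their group mean by more than the noise
    threshold are noise pixels, marked as hole pixels."""
    for value in np.unique(second_img):
        idx = np.nonzero(second_img == value)   # positions of the second pixels
        if idx[0].size < 2:
            continue                            # claim requires at least two
        marked = virtual_view[idx]              # corresponding marked pixels
        mean = marked.mean()                    # average pixel value
        noisy = np.abs(marked - mean) > noise_threshold
        hole_mask[idx] |= noisy
    return hole_mask
```

An outlier among otherwise identical correspondences is the only pixel flagged, which matches the claim's per-group mean comparison.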
10. The method according to any one of claims 2-6, characterised in that
the obtaining, according to the similarity coefficients of any hole pixel r in the high dynamic range image and the first pixel, the pixel value of the hole pixel r comprises:
obtaining the pixel value of the hole pixel r by formula; wherein I(r) denotes the pixel value of the hole pixel r; I_2(r) denotes the pixel value of the pixel corresponding to the hole pixel r in the second image; a_n denotes the similarity coefficients of the hole pixel r; n = 0, 1, ..., N; and N is a preset value.
11. The method according to claim 10, characterised in that
the obtaining the similarity coefficient between the adjacent pixels of any hole pixel r in the high dynamic range image and the adjacent pixels of the first pixel comprises:
obtaining, by formula, the similarity coefficient between the adjacent pixels of any hole pixel r in the high dynamic range image and the adjacent pixels of the first pixel; wherein s denotes a pixel in the neighbourhood Ψ_r of the pixel r in the high dynamic range image; I(s) denotes the pixel value of the pixel s; I_2(s) denotes the pixel value of the pixel corresponding to the pixel s in the second image; r - s denotes the distance between the pixel r and the pixel s; and γ is a preset weight coefficient for the distance between the pixel r and the pixel s.
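The fitting formulas of claims 10 and 11 do not survive in this text, so the sketch below rests on an explicit assumption: the similarity coefficients a_n are fit by distance-weighted least squares so that Σ_n a_n·I_2(s)^n approximates I(s) over the known neighbours s of the hole pixel r, and I(r) is that polynomial evaluated at I_2(r). Unfilled hole pixels are marked NaN; the function name and parameters are illustrative:

```python
import numpy as np

def fill_hole(I, I2, r, N=2, gamma=0.1, radius=2):
    """Estimate the pixel value of hole pixel r in HDR image I from its
    neighbourhood and the corresponding second-image values I2, under the
    polynomial-mapping assumption stated above."""
    ry, rx = r
    pts, tgt, wts = [], [], []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = ry + dy, rx + dx
            if (dy, dx) == (0, 0):
                continue
            if not (0 <= y < I.shape[0] and 0 <= x < I.shape[1]):
                continue
            if np.isnan(I[y, x]):          # skip still-unfilled hole pixels
                continue
            pts.append(I2[y, x])           # adjacent pixels of the first pixel
            tgt.append(I[y, x])            # adjacent pixels of the hole pixel
            wts.append(np.exp(-gamma * (dy * dy + dx * dx)))  # distance weight
    # Weighted least-squares fit of the coefficients a_0 ... a_N
    X = np.vander(np.array(pts), N + 1, increasing=True)
    w = np.sqrt(np.array(wts))
    a, *_ = np.linalg.lstsq(X * w[:, None], np.array(tgt) * w, rcond=None)
    # Claim 10 shape: I(r) = sum_n a_n * I2(r)^n
    return float(np.polyval(a[::-1], I2[ry, rx]))
```

If the local relation between the second image and the HDR image is exactly linear, the fit recovers it and the hole value is exact.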
12. The method according to claim 10, characterised in that
the obtaining the similarity coefficient between the adjacent pixels of any hole pixel r in the high dynamic range image and the adjacent pixels of the first pixel comprises:
obtaining, by formula, the similarity coefficient between the adjacent pixels of any hole pixel r in the high dynamic range image and the adjacent pixels of the first pixel; wherein the first proportionality coefficient ρ_1 and the second proportionality coefficient ρ_2 are preset values; s denotes a pixel in the neighbourhood Φ_r of the pixel r in the high dynamic range image; A denotes the high dynamic range image; a′_n denotes the similarity coefficient obtained when the pixel value of a hole pixel was calculated for the first time; I(s) denotes the pixel value of the pixel s; and I_2(s) denotes the pixel value of the pixel corresponding to the pixel s in the second image.
13. The method according to claim 10, characterised in that
the obtaining the similarity coefficient between the adjacent pixels of any hole pixel r in the high dynamic range image and the adjacent pixels of the first pixel comprises:
determining whether the hole pixel r has a first hole pixel, the first hole pixel being a hole pixel among the adjacent pixels of the hole pixel r whose pixel value has already been obtained;
if it is determined that there is a first hole pixel, taking the similarity coefficient of the first hole pixel as the similarity coefficient of the hole pixel r.
14. The method according to claim 1, characterised in that the synthesis method of the disparity map comprises:
obtaining a first image and a second image, the first image and the second image being obtained by photographing a same object simultaneously;
obtaining a candidate disparity value set for each pixel of the first image, the candidate disparity value set containing at least two candidate disparity values;
obtaining, according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image, the matching energy E_d(p, d_i) of each candidate disparity value in the candidate disparity value set of each pixel of the first image; wherein p denotes the pixel p of the first image corresponding to the candidate disparity value set; d_i denotes the i-th candidate disparity value of the pixel p, i = 1, ..., k; and k is the total number of candidate disparity values in the candidate disparity value set of the pixel p;
obtaining the disparity value of each pixel in the first image according to the matching energy E_d(p, d_i) of each candidate disparity value in the candidate disparity value set of each pixel of the first image;
combining the disparity values of the pixels in the first image to obtain the disparity map;
wherein the obtaining, according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image, the matching energy E_d(p, d_i) of each candidate disparity value in the candidate disparity value set of each pixel of the first image comprises:
calculating, by formula, according to each candidate disparity value in the candidate disparity value set of the pixel p, the matching energy E_d(p, d_i) of the pixel p for each candidate disparity value d_i in the candidate disparity value set; wherein the value of a first fitting parameter a and the value of a second fitting parameter b are the values that make the matching energy E_d(p, d_i) a minimum; w(p, q, d_i) = w_c(p, q, d_i)·w_s(p, q, d_i)·w_d(p, q, d_i); the first pixel block Ω_p denotes a pixel block in the first image that contains the pixel p; the pixel q is a pixel adjacent to the pixel p and belonging to the first pixel block Ω_p; I_1(q) denotes the pixel value of the pixel q; I_2(q - d_i) denotes the pixel value of the pixel q - d_i in the second image corresponding to the pixel q; w_c(p, q, d_i) denotes a pixel weight value; w_s(p, q, d_i) denotes a distance weight value; and w_d(p, q, d_i) denotes a disparity weight value.
15. The method according to claim 14, characterised in that
the pixel weight value w_c(p, q, d_i) can be obtained according to the formula
w_c(p, q, d_i) = exp[-β_1 × |I_1(p) - I_1(q)| × |I_2(p - d_i) - I_2(q - d_i)|];
the distance weight value w_s(p, q, d_i) can be obtained according to the formula
w_s(p, q, d_i) = exp[-β_2 × (p - q)^2];
the disparity weight value w_d(p, q, d_i) can be obtained according to the formula
w_d(p, q, d_i) = exp[-β_3 × (p - q)^2 - β_4 × |I_1(p) - I_1(q)| × |I_2(p - d_i) - I_2(q - d_i)|];
wherein I_1(p) denotes the pixel value of the pixel p; I_2(p - d_i) denotes the pixel value of the pixel p - d_i in the second image corresponding to the pixel p; and the first weight coefficient β_1, the second weight coefficient β_2, the third weight coefficient β_3 and the fourth weight coefficient β_4 are preset values.
16. The method according to claim 14, characterised in that
the obtaining, according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image, the matching energy E_d(p, d_i) of each candidate disparity value in the candidate disparity value set of each pixel of the first image comprises:
calculating, by formula, according to each candidate disparity value in the candidate disparity value set of the pixel p, the matching energy E_d(p, d_i) of the pixel p for each candidate disparity value d_i in the candidate disparity value set;
wherein w(p, q, d_i) = w_c(p, q, d_i)·w_s(p, q, d_i)·w_d(p, q, d_i);
the pixel weight value w_c(p, q, d_i) can be obtained according to the formula
w_c(p, q, d_i) = exp[-β_1 × |I′_1(p) - I′_1(q)| × |I′_2(p - d_i) - I′_2(q - d_i)|];
the distance weight value w_s(p, q, d_i) can be obtained according to the formula
w_s(p, q, d_i) = exp[-β_2 × (p - q)^2];
the disparity weight value w_d(p, q, d_i) can be obtained according to the formula
w_d(p, q, d_i) = exp[-β_3 × (p - q)^2 - β_4 × |I′_1(p) - I′_1(q)| × |I′_2(p - d_i) - I′_2(q - d_i)|];
I′_1(p) = I_1(p)cosθ - I_2(p - d_i)sinθ;
I′_2(p - d_i) = I_1(p)sinθ - I_2(p - d_i)cosθ;
I′_1(q) = I_1(q)cosθ - I_2(q - d_i)sinθ;
I′_2(q - d_i) = I_1(q)sinθ - I_2(q - d_i)cosθ;
the adjustment angle θ is a preset value greater than 0° and less than 90°;
the first pixel block Ω_p denotes a pixel block in the first image that contains the pixel p; the value of the first fitting parameter a and the value of the second fitting parameter b are the values that make the matching energy E_d(p, d_i) a minimum; I_1(p) denotes the pixel value of the pixel p; I_1(q) denotes the pixel value of the pixel q; I_2(p - d_i) denotes the pixel value of the pixel p - d_i in the second image corresponding to the pixel p; and the first weight coefficient β_1, the second weight coefficient β_2, the third weight coefficient β_3 and the fourth weight coefficient β_4 are preset values.
17. The method according to any one of claims 14-16, characterised in that
the obtaining the disparity value of each pixel in the first image according to the matching energy E_d(p, d_i) of each candidate disparity value in the candidate disparity value set of each pixel of the first image comprises:
determining, by formula, the candidate disparity value d_i in the candidate disparity value set of the pixel p that makes the candidate energy E(d_i) a minimum, and taking that candidate disparity value of each pixel as the disparity value of the corresponding pixel in the first image; wherein I denotes the first image; the second pixel block N_p denotes a pixel block in the first image that contains the pixel p; V_p,q(d_i, d_j) = λ × min(|d_i - d_j|, V_max); d_j denotes the j-th candidate disparity value of the pixel q, j = 1, ..., m; m is the total number of candidate disparity values in the candidate disparity value set of the pixel q; the smoothing coefficient λ is a preset value; and the maximum difference V_max between the disparities of adjacent pixels is a preset value.
18. A device for high dynamic range image synthesis, characterised in that it comprises:
an acquiring unit, configured to obtain a first image and a second image, the first image and the second image being obtained by simultaneously photographing a same object with different exposures;
a disparity processing unit, configured to perform binocular stereo matching on the first image and the second image obtained by the acquiring unit, to obtain a disparity map;
a virtual view synthesis unit, configured to synthesise, according to the disparity map obtained by the disparity processing unit and the first image obtained by the acquiring unit, a virtual view having the same viewing angle as the second image;
a grayscale extraction unit, configured to obtain a second grayscale image according to the second image obtained by the acquiring unit, and to obtain a virtual view grayscale image according to the virtual view synthesised by the virtual view synthesis unit;
a high dynamic range fusion unit, configured to obtain a high dynamic range grayscale map according to the second grayscale image and the virtual view grayscale image obtained by the grayscale extraction unit, by a high dynamic range synthesis algorithm;
a colour interpolation unit, configured to obtain a high dynamic range image according to the high dynamic range grayscale map, the second grayscale image, the virtual view grayscale image, the second image and the virtual view.
19. The device according to claim 18, characterised in that it further comprises a hole pixel processing unit, wherein:
the hole pixel processing unit is configured to mark noise pixels in the virtual view and/or an occlusion region as hole pixels; the occlusion region being a region produced by the first image and the second image photographing the same object from different angles; the noise pixels being produced by pixels whose disparity values in the disparity map were calculated incorrectly;
the grayscale extraction unit is specifically configured to obtain a virtual view grayscale image with marked hole pixels according to the virtual view with marked hole pixels;
the high dynamic range fusion unit is specifically configured to obtain a high dynamic range grayscale map with marked hole pixels according to the second grayscale image and the virtual view grayscale image with marked hole pixels, by the high dynamic range synthesis algorithm;
the colour interpolation unit is specifically configured to obtain a high dynamic range image with marked hole pixels according to the high dynamic range grayscale map with marked hole pixels, the second grayscale image, the virtual view grayscale image with marked hole pixels, the second image and the virtual view with marked hole pixels;
the hole pixel processing unit is further configured to determine, in the second image, a first pixel corresponding to each hole pixel in the high dynamic range image with marked hole pixels;
the hole pixel processing unit is further configured to obtain a similarity coefficient between the adjacent pixels of each hole pixel in the high dynamic range image and the adjacent pixels of the first pixel, and to obtain the pixel value of each hole pixel in the high dynamic range image according to the similarity coefficient and the first pixel.
20. The device according to claim 19, characterised in that the disparity processing unit comprises an acquisition module, a calculation module, a determination module and a combination module, wherein:
the acquisition module is configured to obtain a candidate disparity value set for each pixel of the first image, the candidate disparity value set containing at least two candidate disparity values;
the calculation module is configured to obtain, according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image, the matching energy E_d(p, d_i) of each candidate disparity value in the candidate disparity value set of each pixel of the first image; wherein p denotes the pixel p of the first image corresponding to the candidate disparity value set; d_i denotes the i-th candidate disparity value of the pixel p, i = 1, ..., k; and k is the total number of candidate disparity values in the candidate disparity value set of the pixel p;
the determination module is configured to obtain the disparity value of each pixel in the first image according to the matching energy E_d(p, d_i) of each candidate disparity value in the candidate disparity value set of each pixel of the first image;
the combination module is configured to combine the disparity values of the pixels in the first image to obtain the disparity map.
21. The device according to claim 20, characterised in that
the calculation module is specifically configured to calculate, by formula, according to each candidate disparity value in the candidate disparity value set of the pixel p, the matching energy E_d(p, d_i) of the pixel p for each candidate disparity value d_i in the candidate disparity value set; wherein the value of a first fitting parameter a and the value of a second fitting parameter b are the values that make the matching energy E_d(p, d_i) a minimum; w(p, q, d_i) = w_c(p, q, d_i)·w_s(p, q, d_i)·w_d(p, q, d_i); the first pixel block Ω_p denotes a pixel block in the first image that contains the pixel p; the pixel q is a pixel adjacent to the pixel p and belonging to the first pixel block Ω_p; I_1(q) denotes the pixel value of the pixel q; I_2(q - d_i) denotes the pixel value of the pixel q - d_i in the second image corresponding to the pixel q; w_c(p, q, d_i) denotes a pixel weight value; w_s(p, q, d_i) denotes a distance weight value; and w_d(p, q, d_i) denotes a disparity weight value.
22. The device according to claim 21, characterised in that
the pixel weight value w_c(p, q, d_i) can be obtained according to the formula
w_c(p, q, d_i) = exp[-β_1 × |I_1(p) - I_1(q)| × |I_2(p - d_i) - I_2(q - d_i)|];
the distance weight value w_s(p, q, d_i) can be obtained according to the formula
w_s(p, q, d_i) = exp[-β_2 × (p - q)^2];
the disparity weight value w_d(p, q, d_i) can be obtained according to the formula
w_d(p, q, d_i) = exp[-β_3 × (p - q)^2 - β_4 × |I_1(p) - I_1(q)| × |I_2(p - d_i) - I_2(q - d_i)|];
wherein I_1(p) denotes the pixel value of the pixel p; I_2(p - d_i) denotes the pixel value of the pixel p - d_i in the second image corresponding to the pixel p; and the first weight coefficient β_1, the second weight coefficient β_2, the third weight coefficient β_3 and the fourth weight coefficient β_4 are preset values.
23. The device according to claim 20, characterised in that
the calculation module is specifically configured to calculate, by formula, according to each candidate disparity value in the candidate disparity value set of the pixel p, the matching energy E_d(p, d_i) of the pixel p for each candidate disparity value d_i in the candidate disparity value set;
wherein w(p, q, d_i) = w_c(p, q, d_i)·w_s(p, q, d_i)·w_d(p, q, d_i);
the pixel weight value w_c(p, q, d_i) can be obtained according to the formula
w_c(p, q, d_i) = exp[-β_1 × |I′_1(p) - I′_1(q)| × |I′_2(p - d_i) - I′_2(q - d_i)|];
the distance weight value w_s(p, q, d_i) can be obtained according to the formula
w_s(p, q, d_i) = exp[-β_2 × (p - q)^2];
the disparity weight value w_d(p, q, d_i) can be obtained according to the formula
w_d(p, q, d_i) = exp[-β_3 × (p - q)^2 - β_4 × |I′_1(p) - I′_1(q)| × |I′_2(p - d_i) - I′_2(q - d_i)|];
I′_1(p) = I_1(p)cosθ - I_2(p - d_i)sinθ;
I′_2(p - d_i) = I_1(p)sinθ - I_2(p - d_i)cosθ;
I′_1(q) = I_1(q)cosθ - I_2(q - d_i)sinθ;
I′_2(q - d_i) = I_1(q)sinθ - I_2(q - d_i)cosθ;
the adjustment angle θ is a preset value greater than 0° and less than 90°;
the first pixel block Ω_p denotes a pixel block in the first image that contains the pixel p; the value of the first fitting parameter a and the value of the second fitting parameter b are the values that make the matching energy E_d(p, d_i) a minimum; I_1(p) denotes the pixel value of the pixel p; I_1(q) denotes the pixel value of the pixel q; I_2(p - d_i) denotes the pixel value of the pixel p - d_i in the second image corresponding to the pixel p; and the first weight coefficient β_1, the second weight coefficient β_2, the third weight coefficient β_3 and the fourth weight coefficient β_4 are preset values.
24. The device according to any one of claims 20-23, characterised in that
the determination module is specifically configured to determine, by formula, the candidate disparity value d_i in the candidate disparity value set of the pixel p that makes the candidate energy E(d_i) a minimum, and to take that candidate disparity value of each pixel as the disparity value of the corresponding pixel in the first image; wherein I denotes the first image; the second pixel block N_p denotes a pixel block in the first image that contains the pixel p; V_p,q(d_i, d_j) = λ × min(|d_i - d_j|, V_max); d_j denotes the j-th candidate disparity value of the pixel q, j = 1, ..., m; m is the total number of candidate disparity values in the candidate disparity value set of the pixel q; the smoothing coefficient λ is a preset value; and the maximum difference V_max between the disparities of adjacent pixels is a preset value.
25. The device according to any one of claims 18-23, characterised in that
the colour interpolation unit is specifically configured to obtain in turn, by formula, the red component value I_red(e), the green component value I_green(e) and the blue component value I_blue(e) of each pixel in the high dynamic range image; wherein e denotes the pixel e in the high dynamic range image; I_grey(e) denotes the pixel value of the pixel corresponding to the pixel e in the high dynamic range grayscale map; the formulas further use the pixel value of the pixel corresponding to the pixel e in the second grayscale image and the pixel value of the pixel corresponding to the pixel e in the virtual view grayscale image, as well as the red, green and blue component values of the pixel corresponding to the pixel e in the second image and the red, green and blue component values of the pixel corresponding to the pixel e in the virtual view;
the colour interpolation unit is specifically configured to obtain the pixel value of each pixel in the high dynamic range image according to the red component value, the green component value and the blue component value of each pixel in the high dynamic range image;
the colour interpolation unit is specifically configured to combine the pixel values of the pixels in the high dynamic range image to obtain the high dynamic range image.
26. The device according to any one of claims 19-23, characterized in that:
the hole pixel processing unit is specifically configured to determine, in the second image, at least two second pixels, where the second pixels are pixels having identical pixel values;
the hole pixel processing unit is specifically configured to obtain, from the at least two second pixels in the second image, at least two marked pixels in the virtual view, where the at least two marked pixels are the pixels in the virtual view that respectively correspond to the at least two second pixels in the second image;
the hole pixel processing unit is specifically configured to obtain the average pixel value of the at least two marked pixels in the virtual view;
the hole pixel processing unit is specifically configured to successively determine whether the difference between the pixel value of each of the at least two marked pixels in the virtual view and the average pixel value is greater than the noise threshold;
the hole pixel processing unit is specifically configured to, when the difference between the pixel value of a marked pixel and the average pixel value is greater than the noise threshold, determine that marked pixel to be a noise pixel and mark the noise pixel as a hole pixel.
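The noise test of claim 26 can be sketched directly: among the virtual-view pixels whose second-image counterparts share one pixel value, any pixel deviating from the group average by more than the noise threshold is re-marked as a hole pixel. The function name is illustrative:

```python
def mark_noise_as_holes(marked_values, noise_threshold):
    """Given the virtual-view pixel values of the marked pixels (pixels whose
    corresponding second-image pixels all have the same value), return the
    indices of the marked pixels to be re-marked as hole pixels.

    A marked pixel is a noise pixel when its value differs from the group
    average by more than the noise threshold (claim 26).
    """
    avg = sum(marked_values) / len(marked_values)
    return [i for i, v in enumerate(marked_values)
            if abs(v - avg) > noise_threshold]
```

For example, in the group `[10, 10, 10, 40]` with threshold `10`, only the last pixel deviates from the average (17.5) by more than the threshold and is marked as a hole.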
27. The device according to any one of claims 19-23, characterized in that:
the hole pixel processing unit is specifically configured to obtain the pixel value of the hole pixel r according to the formula; where I(r) denotes the pixel value of the hole pixel r; I2(r) denotes the pixel value of the pixel corresponding to the hole pixel r in the second image; an denotes a similarity factor of the hole pixel r; n = 0, 1, ..., N; and N is a preset value.
28. The device according to claim 27, characterized in that:
the hole pixel processing unit is specifically configured to obtain, according to the formula, the similarity factor between an adjacent pixel of any hole pixel r in the high dynamic range image and the adjacent pixel of the first pixel; where s denotes a pixel in the neighborhood Ψr of the pixel r in the high dynamic range image; I(s) denotes the pixel value of the pixel s; I2(s) denotes the pixel value of the pixel corresponding to the pixel s in the second image; r - s denotes the distance between the pixel r and the pixel s; and γ is a preset weight coefficient for the distance between the pixel r and the pixel s.
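The similarity-factor formula of claim 28 also appears only as an image in the source. A hypothetical sketch, assuming the factor is a distance-weighted comparison of the HDR-image and second-image values over the neighborhood Ψr, with γ weighting the spatial distance as the claim states — the Gaussian form and the ratio structure are assumptions, not taken from the claim:

```python
import math

def similarity_factor(r, neighborhood, I, I2, gamma):
    """Assumed distance-weighted similarity between the neighborhood of the
    hole pixel r in the HDR image (I) and in the second image (I2).

    r:            (row, col) of the hole pixel
    neighborhood: iterable of (row, col) coordinates s in Psi_r
    I, I2:        dicts mapping coordinates to pixel values
    gamma:        preset weight coefficient for the distance |r - s|
    """
    num = den = 0.0
    for s in neighborhood:
        dist = math.hypot(r[0] - s[0], r[1] - s[1])
        w = math.exp(-gamma * dist * dist)  # farther pixels contribute less
        num += w * I[s]
        den += w * I2[s]
    return num / den if den else 0.0
```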
29. The device according to claim 27, characterized in that:
the hole pixel processing unit is specifically configured to obtain, according to the formula, the similarity factor between an adjacent pixel of any hole pixel r in the high dynamic range image and the adjacent pixel of the first pixel; where the first proportionality coefficient ρ1 and the second proportionality coefficient ρ2 are preset values; s denotes a pixel in the neighborhood Φr of the pixel r in the high dynamic range image; A denotes the high dynamic range image; a′n denotes the similarity factor obtained when the pixel value of a hole pixel is calculated for the first time; I(s) denotes the pixel value of the pixel s; and I2(s) denotes the pixel value of the pixel corresponding to the pixel s in the second image.
30. The device according to claim 27, characterized in that:
the hole pixel processing unit is specifically configured to determine whether the hole pixel r has a first hole pixel, where the first hole pixel is a hole pixel, among the hole pixels adjacent to the hole pixel r, whose pixel value has already been obtained;
the hole pixel processing unit is specifically configured to, when it is determined that the first hole pixel exists, take the similarity factor of the first hole pixel as the similarity factor of the hole pixel r.
31. The device according to claim 18, characterized in that the device further comprises a computing unit, a determining unit and a processing unit, and the acquiring unit, the computing unit, the determining unit and the processing unit are used to synthesize the disparity map;
the acquiring unit is further configured to obtain a candidate disparity value set for each pixel of the first image, where the candidate disparity value set contains at least two candidate disparity values;
the computing unit is configured to obtain, from each pixel of the first image, the pixel in the second image corresponding to that pixel, and the candidate disparity value set of that pixel, the matching energy Ed(p, di) of each candidate disparity value in the candidate disparity value set of each pixel of the first image; where p denotes a pixel p, the pixel of the first image to which the candidate disparity value set belongs; di denotes the i-th candidate disparity value of the pixel p, i = 1, ..., k; and k is the total number of candidate disparity values in the candidate disparity value set of the pixel p;
the determining unit is configured to obtain the disparity value of each pixel in the first image according to the matching energy Ed(p, di) of each candidate disparity value in the candidate disparity value set of each pixel of the first image;
the processing unit is configured to combine the disparity values of all pixels in the first image into the disparity map;
the computing unit is specifically configured to obtain, according to each candidate disparity value in the candidate disparity set of the pixel p and using the formula, the matching energy Ed(p, di) of the pixel p for each candidate disparity value di in the candidate disparity value set; where the value of the first fitting parameter a and the value of the second fitting parameter b are the values that minimize the matching energy Ed(p, di); w(p, q, di) = wc(p, q, di) · ws(p, q, di) · wd(p, q, di); the first pixel block Ωp denotes a pixel block in the first image containing the pixel p; the pixel q is a pixel adjacent to the pixel p and belonging to the first pixel block Ωp; I1(q) denotes the pixel value of the pixel q; I2(q - di) denotes the pixel value of the pixel q - di in the second image corresponding to the pixel q; wc(p, q, di) denotes a pixel weight value; ws(p, q, di) denotes a distance weight value; and wd(p, q, di) denotes a disparity weight value.
32. The device according to claim 31, characterized in that:
the pixel weight value wc(p, q, di) can be obtained according to the formula
wc(p, q, di) = exp[-β1 × |I1(p) - I1(q)| × |I2(p - di) - I2(q - di)|];
the distance weight value ws(p, q, di) can be obtained according to the formula
ws(p, q, di) = exp[-β2 × (p - q)²];
the disparity weight value wd(p, q, di) can be obtained according to the formula
wd(p, q, di) = exp[-β3 × (p - q)² - β4 × |I1(p) - I1(q)| × |I2(p - di) - I2(q - di)|];
where I1(p) denotes the pixel value of the pixel p; I2(p - di) denotes the pixel value of the pixel p - di in the second image corresponding to the pixel p; and the first weight coefficient β1, the second weight coefficient β2, the third weight coefficient β3 and the fourth weight coefficient β4 are preset values.
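The three weight formulas of claim 32 can be transcribed directly. In this sketch, pixel values are scalars and p, q and di are treated as 1-D indices so that (p - q)² matches the distance term of the claim; the combined weight w = wc · ws · wd of claim 31 is noted in the return comment:

```python
import math

def weights(I1, I2, p, q, di, b1, b2, b3, b4):
    """Pixel, distance and disparity weights of claim 32.

    I1, I2: sequences of pixel values of the first and second image
    p, q:   1-D pixel indices; di: candidate disparity value
    b1..b4: the preset weight coefficients beta_1 .. beta_4
    """
    color_diff = abs(I1[p] - I1[q]) * abs(I2[p - di] - I2[q - di])
    wc = math.exp(-b1 * color_diff)                     # pixel weight
    ws = math.exp(-b2 * (p - q) ** 2)                   # distance weight
    wd = math.exp(-b3 * (p - q) ** 2 - b4 * color_diff)  # disparity weight
    return wc, ws, wd  # claim 31 combines them as w = wc * ws * wd
```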
33. The device according to claim 31, characterized in that:
the computing unit is specifically configured to obtain, according to each candidate disparity value in the candidate disparity set of the pixel p and using the formula, the matching energy Ed(p, di) of the pixel p for each candidate disparity value di in the candidate disparity value set;
where w(p, q, di) = wc(p, q, di) · ws(p, q, di) · wd(p, q, di);
the pixel weight value wc(p, q, di) can be obtained according to the formula
wc(p, q, di) = exp[-β1 × |I′1(p) - I′1(q)| × |I′2(p - di) - I′2(q - di)|];
the distance weight value ws(p, q, di) can be obtained according to the formula
ws(p, q, di) = exp[-β2 × (p - q)²];
the disparity weight value wd(p, q, di) can be obtained according to the formula
wd(p, q, di) = exp[-β3 × (p - q)² - β4 × |I′1(p) - I′1(q)| × |I′2(p - di) - I′2(q - di)|];
where I′1(p) = I1(p)cosθ - I2(p - di)sinθ;
I′2(p - di) = I1(p)sinθ - I2(p - di)cosθ;
I′1(q) = I1(q)cosθ - I2(q - di)sinθ;
I′2(q - di) = I1(q)sinθ - I2(q - di)cosθ;
the adjustment angle θ is a preset value greater than 0° and less than 90°;
the first pixel block Ωp denotes a pixel block in the first image containing the pixel p; the value of the first fitting parameter a and the value of the second fitting parameter b are the values that minimize the matching energy Ed(p, di); I1(p) denotes the pixel value of the pixel p; I1(q) denotes the pixel value of the pixel q; I2(p - di) denotes the pixel value of the pixel p - di in the second image corresponding to the pixel p; and the first weight coefficient β1, the second weight coefficient β2, the third weight coefficient β3 and the fourth weight coefficient β4 are preset values.
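The transformed intensity pair that claim 33 substitutes into the claim-32 weights can be sketched as written (note that the source uses a minus sign in both expressions):

```python
import math

def rotate_pair(i1, i2, theta):
    """Return (I'1, I'2) for one pixel, per claim 33.

    i1:    I1(p), pixel value in the first image
    i2:    I2(p - d_i), pixel value in the second image
    theta: the preset adjustment angle, 0 < theta < 90 degrees (radians here)
    """
    ip1 = i1 * math.cos(theta) - i2 * math.sin(theta)
    ip2 = i1 * math.sin(theta) - i2 * math.cos(theta)
    return ip1, ip2
```

The weights of claim 33 are then the claim-32 weights evaluated on these transformed values instead of I1 and I2.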
34. The device according to any one of claims 31-33, characterized in that:
the determining unit is specifically configured to obtain, according to the formula, the candidate disparity value of each pixel that minimizes the candidate energy E(di) over the candidate disparity values di in the candidate disparity value set of the pixel p, and to determine that candidate disparity value as the disparity value of the corresponding pixel in the first image; where I denotes the first image; the second pixel block Np denotes a pixel block in the first image containing the pixel p; Vp,q(di, dj) = λ × min(|di - dj|, Vmax); dj denotes the j-th candidate disparity value of the pixel q, j = 1, ..., m; m is the total number of candidate disparity values in the candidate disparity value set of the pixel q; the smoothing factor λ is a preset value; and Vmax, the maximum difference between the disparities of adjacent pixels, is a preset value.
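The pairwise smoothness term Vp,q of claim 34 is given explicitly and can be transcribed as is; the full candidate energy E(di), whose formula is not reproduced in the source, would additionally sum the matching term over the block Np:

```python
def smoothness(di, dj, lam, v_max):
    """Truncated-linear smoothness penalty of claim 34:
    V_pq(d_i, d_j) = lambda * min(|d_i - d_j|, V_max).

    lam:   the preset smoothing factor lambda
    v_max: the preset maximum disparity difference between adjacent pixels
    """
    return lam * min(abs(di - dj), v_max)
```

Truncating at Vmax keeps the penalty bounded, so a genuine depth discontinuity between neighboring pixels is not penalized without limit.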
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410101591.1A CN104935911B (en) | 2014-03-18 | 2014-03-18 | A kind of method and device of high dynamic range images synthesis |
PCT/CN2014/089071 WO2015139454A1 (en) | 2014-03-18 | 2014-10-21 | Method and device for synthesizing high dynamic range image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104935911A CN104935911A (en) | 2015-09-23 |
CN104935911B true CN104935911B (en) | 2017-07-21 |
Family
ID=54122843
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410101591.1A Active CN104935911B (en) | 2014-03-18 | 2014-03-18 | A kind of method and device of high dynamic range images synthesis |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN104935911B (en) |
WO (1) | WO2015139454A1 (en) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU2016326942B2 (en) * | 2015-09-25 | 2019-10-10 | Sony Corporation | Image processing device and image processing method |
US10097747B2 (en) * | 2015-10-21 | 2018-10-09 | Qualcomm Incorporated | Multiple camera autofocus synchronization |
US10341543B2 (en) * | 2016-04-28 | 2019-07-02 | Qualcomm Incorporated | Parallax mask fusion of color and mono images for macrophotography |
US9998720B2 (en) * | 2016-05-11 | 2018-06-12 | Mediatek Inc. | Image processing method for locally adjusting image data of real-time image |
CN108335279B (en) * | 2017-01-20 | 2022-05-17 | 微软技术许可有限责任公司 | Image fusion and HDR imaging |
CN108354435A (en) * | 2017-01-23 | 2018-08-03 | 上海长膳智能科技有限公司 | Automatic cooking apparatus and the method cooked using it |
WO2018209603A1 (en) * | 2017-05-17 | 2018-11-22 | 深圳配天智能技术研究院有限公司 | Image processing method, image processing device, and storage medium |
CN107396082B (en) * | 2017-07-14 | 2020-04-21 | 歌尔股份有限公司 | Image data processing method and device |
CN109819173B (en) * | 2017-11-22 | 2021-12-03 | 浙江舜宇智能光学技术有限公司 | Depth fusion method based on TOF imaging system and TOF camera |
CN107948519B (en) | 2017-11-30 | 2020-03-27 | Oppo广东移动通信有限公司 | Image processing method, device and equipment |
CN108184075B (en) * | 2018-01-17 | 2019-05-10 | 百度在线网络技术(北京)有限公司 | Method and apparatus for generating image |
CN110276714B (en) * | 2018-03-16 | 2023-06-06 | 虹软科技股份有限公司 | Method and device for synthesizing rapid scanning panoramic image |
TWI684165B (en) * | 2018-07-02 | 2020-02-01 | 華晶科技股份有限公司 | Image processing method and electronic device |
CN109842791B (en) * | 2019-01-15 | 2020-09-25 | 浙江舜宇光学有限公司 | Image processing method and device |
CN112149493B (en) * | 2020-07-31 | 2022-10-11 | 上海大学 | Road elevation measurement method based on binocular stereo vision |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101887589A (en) * | 2010-06-13 | 2010-11-17 | 东南大学 | Stereoscopic vision-based real low-texture image reconstruction method |
CN102422124A (en) * | 2010-05-31 | 2012-04-18 | 松下电器产业株式会社 | Imaging device, imaging means and program |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9210322B2 (en) * | 2010-12-27 | 2015-12-08 | Dolby Laboratories Licensing Corporation | 3D cameras for HDR |
CN102779334B (en) * | 2012-07-20 | 2015-01-07 | 华为技术有限公司 | Correction method and device of multi-exposure motion image |
Also Published As
Publication number | Publication date |
---|---|
CN104935911A (en) | 2015-09-23 |
WO2015139454A1 (en) | 2015-09-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104935911B (en) | A kind of method and device of high dynamic range images synthesis | |
CN104661010B (en) | Method and device for establishing three-dimensional model | |
JP4548840B2 (en) | Image processing method, image processing apparatus, program for image processing method, and program recording medium | |
CN101902657B (en) | Method for generating virtual multi-viewpoint images based on depth image layering | |
CN109255831A (en) | The method that single-view face three-dimensional reconstruction and texture based on multi-task learning generate | |
EP2306745B1 (en) | Method and system for creating depth and volume in a 2-D planar image | |
CN106886979A (en) | A kind of image splicing device and image split-joint method | |
CN110163953A (en) | Three-dimensional facial reconstruction method, device, storage medium and electronic device | |
CN101866497A (en) | Binocular stereo vision based intelligent three-dimensional human face rebuilding method and system | |
CN106030661A (en) | View independent 3d scene texturing | |
CN104616286B (en) | Quick semi-automatic multi views depth restorative procedure | |
CN110427968A (en) | A kind of binocular solid matching process based on details enhancing | |
CN107170000B (en) | Stereopsis dense Stereo Matching method based on the optimization of global block | |
CN106256122A (en) | Image processing equipment and image processing method | |
CN101625768A (en) | Three-dimensional human face reconstruction method based on stereoscopic vision | |
CN110660020B (en) | Image super-resolution method of antagonism generation network based on fusion mutual information | |
CN102665086A (en) | Method for obtaining parallax by using region-based local stereo matching | |
CN102075779A (en) | Intermediate view synthesizing method based on block matching disparity estimation | |
CN108171249B (en) | RGBD data-based local descriptor learning method | |
CN110060283B (en) | Multi-measure semi-global dense matching method | |
CN111988593A (en) | Three-dimensional image color correction method and system based on depth residual optimization | |
CN116418961B (en) | Light field display method and system based on three-dimensional scene stylization | |
CN106652037A (en) | Face mapping processing method and apparatus | |
CN115239861A (en) | Face data enhancement method and device, computer equipment and storage medium | |
CN104751508B (en) | The full-automatic of new view is quickly generated and complementing method in the making of 3D three-dimensional films |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |