CN104935911A - Method and device for high-dynamic-range image synthesis


Info

Publication number
CN104935911A
Authority
CN
China
Prior art keywords
pixel
image
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410101591.1A
Other languages
Chinese (zh)
Other versions
CN104935911B (en)
Inventor
高山
徐崚峰
区子廉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201410101591.1A (granted as CN104935911B)
Priority to PCT/CN2014/089071 (published as WO2015139454A1)
Publication of CN104935911A
Application granted
Publication of CN104935911B
Legal status: Active


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/111 Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/741 Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/15 Processing image signals for colour aspects of image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074 Stereoscopic image analysis
    • H04N2013/0081 Depth or disparity estimation from stereoscopic image signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present invention provide a method and a device for high-dynamic-range image synthesis, relating to the field of image processing and intended to improve the quality of high-dynamic-range images. The method comprises the steps of: obtaining a first image and a second image; performing binocular stereo matching on the first image and the second image to obtain a disparity map; synthesizing, according to the disparity map and the first image, a virtual view having the same viewing angle as the second image; obtaining a second grayscale image from the second image and a virtual-view grayscale image from the virtual view; obtaining a high-dynamic-range grayscale image from the second grayscale image and the virtual-view grayscale image by means of a high-dynamic-range synthesis algorithm; and obtaining a high-dynamic-range image according to the high-dynamic-range grayscale image, the second grayscale image, the virtual-view grayscale image, the second image, and the virtual view. The method and the device provided by the invention are suitable for high-dynamic-range image synthesis scenarios.

Description

Method and device for synthesizing a high dynamic range image
Technical Field
The present invention relates to the field of image processing, and in particular, to a method and an apparatus for synthesizing an image with a high dynamic range.
Background
A high dynamic range image is obtained by adjusting the exposure time of a camera, shooting the same scene multiple times with different exposure times, and fusing the resulting images through an image synthesis technique. An image with a long exposure retains clear detail in dark areas, while an image with a short exposure retains clear detail in bright areas. Compared with an ordinary image, a high dynamic range image provides a greater dynamic range and more image detail, and better reflects the real environment.
Existing high dynamic range image synthesis techniques fall mainly into two categories: the first is single-camera high dynamic range image synthesis; the second is multi-camera high dynamic range image synthesis.
In the multi-camera technique, multiple cameras shoot the same object at the same time with different exposure times to obtain multiple images. Two of these images are selected, and a disparity map is computed from the correspondences between their points. According to the disparity map and the two images, one of the two images is then synthesized into a virtual image at the viewpoint of the other, and the final high dynamic range image is obtained from the virtual image and the image at that other viewpoint.
In the course of implementing multi-camera high dynamic range image synthesis, the inventors found at least the following problems in the prior art: under large exposure differences, depth extraction in over-dark and over-bright areas is not accurate enough when the disparity map is computed, which introduces noise into the final high dynamic range image; moreover, in the prior art, virtual image synthesis interpolates using only neighborhood information of the current image, so the high dynamic range image exhibits chromatic aberration, which further degrades its quality.
Disclosure of Invention
Embodiments of the present invention provide a method and an apparatus for high dynamic range image synthesis, so as to improve the quality of a high dynamic range image.
In order to achieve the above purpose, the embodiments of the present invention adopt the following technical solutions:
In a first aspect, an embodiment of the present invention provides a method for synthesizing a high dynamic range image, including: acquiring a first image and a second image, the first image and the second image being obtained by shooting the same object at the same time with different exposures; performing binocular stereo matching on the first image and the second image to obtain a disparity map; synthesizing a virtual view having the same viewing angle as the second image according to the disparity map and the first image; obtaining a second grayscale image according to the second image, and obtaining a virtual view grayscale image according to the virtual view; obtaining a high dynamic range grayscale image through a high dynamic range synthesis algorithm according to the second grayscale image and the virtual view grayscale image; and obtaining a high dynamic range image according to the high dynamic range grayscale image, the second grayscale image, the virtual view grayscale image, the second image, and the virtual view.
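Read end to end, the claimed steps form a short pipeline. Below is a minimal sketch of that pipeline in Python; the helpers `warp_to_second_view`, `fuse_hdr_grayscale`, `compute_eta`, and `interpolate_color` are placeholders for steps detailed later in this disclosure, and the OpenCV SGBM matcher merely stands in for the patent's own weighted matching-energy scheme:

```python
import numpy as np
import cv2

def synthesize_hdr(first_img, second_img):
    """Sketch of the claimed pipeline: two differently exposed views of the
    same scene, taken at the same time, fused into one HDR image."""
    # Steps 1-2: binocular stereo matching between the two exposures.
    gray1 = cv2.cvtColor(first_img, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(second_img, cv2.COLOR_BGR2GRAY)
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
    disparity = matcher.compute(gray1, gray2).astype(np.float32) / 16.0

    # Step 3: warp the first image to the second image's viewpoint.
    virtual_view = warp_to_second_view(first_img, disparity)    # placeholder

    # Step 4: grayscale versions of the second image and the virtual view.
    second_gray = cv2.cvtColor(second_img, cv2.COLOR_BGR2GRAY)
    virtual_gray = cv2.cvtColor(np.clip(virtual_view, 0, 255).astype(np.uint8),
                                cv2.COLOR_BGR2GRAY)

    # Step 5: HDR fusion of the two grayscale images.
    hdr_gray = fuse_hdr_grayscale(second_gray, virtual_gray)    # placeholder

    # Step 6: per-pixel weight from the three grayscale images, then the
    # per-channel color interpolation of the seventh implementation manner.
    eta = compute_eta(hdr_gray, second_gray, virtual_gray)      # placeholder
    return interpolate_color(eta, second_img, virtual_view)
```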
In a first possible implementation manner of the first aspect, the method further includes: when the virtual view having the same viewing angle as the second image is synthesized according to the disparity map and the first image, marking pixels of an occlusion area in the virtual view as hole pixels, the occlusion area being an area produced because the first image and the second image shoot the same object from different angles; or, after the synthesizing of the virtual view having the same viewing angle as the second image according to the disparity map and the first image, and before the obtaining of the second grayscale image according to the second image and of the virtual view grayscale image according to the virtual view, the method further includes: marking noise pixels or the occlusion areas in the virtual view as hole pixels, a noise pixel being produced by a pixel whose disparity value is computed incorrectly in the disparity map. The obtaining of the virtual view grayscale image according to the virtual view includes: obtaining a virtual view grayscale image marked with the hole pixels according to the virtual view marked with the hole pixels. The obtaining of the high dynamic range grayscale image through a high dynamic range synthesis algorithm according to the second grayscale image and the virtual view grayscale image includes: obtaining a high dynamic range grayscale image marked with the hole pixels through a high dynamic range synthesis algorithm according to the second grayscale image and the virtual view grayscale image marked with the hole pixels. The obtaining of the high dynamic range image according to the high dynamic range grayscale image, the second grayscale image, the virtual view grayscale image, the second image, and the virtual view includes: obtaining a high dynamic range image marked with the hole pixels according to the high dynamic range grayscale image marked with the hole pixels, the second grayscale image, the virtual view grayscale image marked with the hole pixels, the second image, and the virtual view marked with the hole pixels. After the high dynamic range image is obtained, the method further includes: determining, in the second image, a first pixel corresponding to each hole pixel in the high dynamic range image marked with the hole pixels; acquiring a similarity coefficient between the adjacent pixels of each hole pixel in the high dynamic range image and the adjacent pixels of the first pixel; and obtaining the pixel value of each hole pixel in the high dynamic range image according to the similarity coefficient and the first pixel.
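The occlusion marking during view synthesis can be pictured as forward warping in which any target pixel that no source pixel reaches stays a hole. A minimal sketch, assuming rectified views, a horizontal-only disparity map, and a float image so the sentinel value is representable (all illustrative assumptions, not requirements stated here):

```python
import numpy as np

HOLE = -1  # illustrative sentinel marking a hole pixel

def warp_to_second_view(first_img, disparity):
    """Forward-warp the first image to the second image's viewpoint; target
    pixels that receive no source pixel are occlusions and stay holes."""
    h, w = disparity.shape
    virtual = np.full(first_img.shape, HOLE, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            xt = int(round(x - disparity[y, x]))  # target column in view 2
            if 0 <= xt < w:
                virtual[y, xt] = first_img[y, x]
    return virtual
```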
With reference to the first aspect or the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, the performing of binocular stereo matching on the first image and the second image to obtain a disparity map includes: obtaining a set of candidate disparity values for each pixel of the first image, wherein the set of candidate disparity values comprises at least two candidate disparity values; obtaining the matching energy $E_d(p,d_i)$ of each candidate disparity value in the candidate disparity value set of each pixel of the first image according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image, wherein p denotes a pixel of the first image corresponding to the set of candidate disparity values, $d_i$, $i = 1, \ldots, k$, denotes the i-th candidate disparity value of the pixel p, and k is the total number of candidate disparity values in the candidate disparity value set of the pixel p; obtaining the disparity value of each pixel in the first image according to the matching energy $E_d(p,d_i)$ of each candidate disparity value in the candidate disparity value set of each pixel of the first image; and combining the disparity values of all pixels in the first image to obtain the disparity map.
With reference to the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, the obtaining, according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image, of the matching energy $E_d(p,d_i)$ of each candidate disparity value in the candidate disparity value set of each pixel of the first image includes: according to each candidate disparity value in the candidate disparity set of the pixel p, using the formula

$$E_d(p,d_i) = \frac{\sum_{q \in \Omega_p} w(p,q,d_i) \times \left[ I_1(q) - a \times I_2(q-d_i) - b \right]^2}{\sum_{q \in \Omega_p} w(p,q,d_i)}$$

to find the matching energy $E_d(p,d_i)$ of each candidate disparity value $d_i$ in the set of candidate disparity values of the pixel p; wherein the values of the first fitting parameter a and the second fitting parameter b are those for which the matching energy $E_d(p,d_i)$ takes its minimum value; $w(p,q,d_i) = w_c(p,q,d_i)\, w_s(p,q,d_i)\, w_d(p,q,d_i)$; the first pixel block $\Omega_p$ represents a block of pixels of the first image containing the pixel p; the pixel q is a pixel adjacent to the pixel p and belonging to the first pixel block $\Omega_p$; $I_1(q)$ represents the pixel value of the pixel q; $I_2(q-d_i)$ represents the pixel value of the pixel $q-d_i$ in the second image corresponding to the pixel q; $w_c(p,q,d_i)$ represents a pixel weight value; $w_s(p,q,d_i)$ represents a distance weight value; and $w_d(p,q,d_i)$ represents a disparity weight value.
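For a fixed candidate disparity $d_i$, the fitting parameters a and b minimize a weighted least-squares fit of $I_1(q)$ against $I_2(q-d_i)$, so they admit the closed form of a weighted linear regression. A numpy sketch under that reading, with the pixel block flattened to 1-D for brevity (an illustrative simplification):

```python
import numpy as np

def matching_energy(I1_patch, I2_patch, w):
    """Matching energy E_d(p, d_i) over a pixel block Omega_p.

    I1_patch: values I_1(q) for q in Omega_p (flattened)
    I2_patch: values I_2(q - d_i) at the candidate disparity d_i
    w:        combined weights w(p, q, d_i) = w_c * w_s * w_d
    """
    # a, b solve the weighted least-squares fit I_1(q) ~ a*I_2(q-d_i) + b,
    # i.e. the values minimizing E_d(p, d_i) for this candidate disparity.
    W = np.sum(w)
    mean1 = np.sum(w * I1_patch) / W
    mean2 = np.sum(w * I2_patch) / W
    cov = np.sum(w * (I1_patch - mean1) * (I2_patch - mean2)) / W
    var2 = np.sum(w * (I2_patch - mean2) ** 2) / W
    a = cov / max(var2, 1e-12)        # guard against a flat patch
    b = mean1 - a * mean2
    residual = I1_patch - a * I2_patch - b
    return np.sum(w * residual ** 2) / W
```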
With reference to the third possible implementation manner of the first aspect, in a fourth possible implementation manner of the first aspect, the pixel weight value $w_c(p,q,d_i)$ can be obtained according to the formula

$$w_c(p,q,d_i) = \exp\left[ -\beta_1 \times |I_1(p) - I_1(q)| \times |I_2(p-d_i) - I_2(q-d_i)| \right];$$

the distance weight value $w_s(p,q,d_i)$ can be obtained according to the formula

$$w_s(p,q,d_i) = \exp\left[ -\beta_2 \times (p-q)^2 \right];$$

and the disparity weight value $w_d(p,q,d_i)$ can be obtained according to the formula

$$w_d(p,q,d_i) = \exp\left[ -\beta_3 \times (p-q)^2 - \beta_4 \times |I_1(p) - I_1(q)| \times |I_2(p-d_i) - I_2(q-d_i)| \right];$$

wherein $I_1(p)$ represents the pixel value of the pixel p; $I_2(p-d_i)$ represents the pixel value of the pixel $p-d_i$ in the second image corresponding to the pixel p; and the first weight coefficient $\beta_1$, the second weight coefficient $\beta_2$, the third weight coefficient $\beta_3$, and the fourth weight coefficient $\beta_4$ are preset values.
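A sketch of the three weights on a 1-D scanline (pixel positions as integer indices); the $\beta$ values here are illustrative preset coefficients, not values given in this disclosure:

```python
import numpy as np

def combined_weight(I1, I2, p, q, d_i, betas=(0.01, 0.001, 0.001, 0.01)):
    """w(p,q,d_i) = w_c * w_s * w_d for a neighbor q of p at disparity d_i."""
    b1, b2, b3, b4 = betas
    diff = abs(I1[p] - I1[q]) * abs(I2[p - d_i] - I2[q - d_i])
    w_c = np.exp(-b1 * diff)                      # pixel weight
    w_s = np.exp(-b2 * (p - q) ** 2)              # distance weight
    w_d = np.exp(-b3 * (p - q) ** 2 - b4 * diff)  # disparity weight
    return w_c * w_s * w_d
```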
With reference to the second possible implementation manner of the first aspect, in a fifth possible implementation manner of the first aspect, the obtaining, according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image, of the matching energy $E_d(p,d_i)$ of each candidate disparity value in the candidate disparity value set of each pixel of the first image includes: according to each candidate disparity value in the candidate disparity set of the pixel p, using the formula

$$E_d(p,d_i) = \frac{\sum_{q \in \Omega_p} w(p,q,d_i) \times \left[ I'_1(q) - a \times I'_2(q-d_i) - b \right]^2}{\sum_{q \in \Omega_p} w(p,q,d_i)}$$

to find the matching energy $E_d(p,d_i)$ of each candidate disparity value $d_i$ in the set of candidate disparity values of the pixel p;

wherein $w(p,q,d_i) = w_c(p,q,d_i)\, w_s(p,q,d_i)\, w_d(p,q,d_i)$;

the pixel weight value $w_c(p,q,d_i)$ can be obtained according to the formula

$$w_c(p,q,d_i) = \exp\left[ -\beta_1 \times |I'_1(p) - I'_1(q)| \times |I'_2(p-d_i) - I'_2(q-d_i)| \right];$$

the distance weight value $w_s(p,q,d_i)$ can be obtained according to the formula

$$w_s(p,q,d_i) = \exp\left[ -\beta_2 \times (p-q)^2 \right];$$

the disparity weight value $w_d(p,q,d_i)$ can be obtained according to the formula

$$w_d(p,q,d_i) = \exp\left[ -\beta_3 \times (p-q)^2 - \beta_4 \times |I'_1(p) - I'_1(q)| \times |I'_2(p-d_i) - I'_2(q-d_i)| \right];$$

$I'_1(p) = I_1(p)\cos\theta - I_2(p-d_i)\sin\theta$; $I'_2(p-d_i) = I_1(p)\sin\theta - I_2(p-d_i)\cos\theta$;

$I'_1(q) = I_1(q)\cos\theta - I_2(q-d_i)\sin\theta$; $I'_2(q-d_i) = I_1(q)\sin\theta - I_2(q-d_i)\cos\theta$;

and the adjustment angle $\theta$ is a value set in advance to be greater than 0° and less than 90°.
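The rotated intensities mix the two exposures before the fit; a short sketch, with the signs taken directly from the formulas above and a 45° angle as an illustrative choice for the preset $\theta$:

```python
import numpy as np

def rotated_intensities(I1_val, I2_val, theta_deg=45.0):
    """Rotate the intensity pair (I_1(p), I_2(p - d_i)) by the preset
    adjustment angle theta, 0 < theta < 90 degrees."""
    t = np.deg2rad(theta_deg)
    I1_rot = I1_val * np.cos(t) - I2_val * np.sin(t)
    I2_rot = I1_val * np.sin(t) - I2_val * np.cos(t)
    return I1_rot, I2_rot
```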
With reference to any one of the second to fifth possible implementation manners of the first aspect, in a sixth possible implementation manner of the first aspect, the obtaining of the disparity value of each pixel in the first image according to the matching energy $E_d(p,d_i)$ of each candidate disparity value in the candidate disparity value set of each pixel of the first image includes: according to the formula

$$E(d_i) = \sum_{p \in I} E_d(p,d_i) + \sum_{p \in I} \sum_{q \in N_p} V_{p,q}(d_i,d_j),$$

finding, for each pixel in the first image, the candidate disparity value in its set of candidate disparity values for which the candidate energy $E(d_i)$ takes its minimum value, and determining that candidate disparity value as the disparity value of the pixel; wherein I represents the first image; the second pixel block $N_p$ represents a block of pixels of the first image containing the pixel p; $V_{p,q}(d_i,d_j) = \lambda \times \min(|d_i - d_j|, V_{max})$; $d_j$ represents the j-th candidate disparity value of the pixel q, $j = 1, \ldots, m$; m is the total number of candidate disparity values in the candidate disparity value set of the pixel q; the smoothing coefficient $\lambda$ is a preset value; and the maximum inter-pixel disparity difference $V_{max}$ is a preset value.
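The second term is a truncated-linear smoothness penalty on neighboring disparities; this disclosure does not name a solver, and energies of this form are commonly minimized with graph cuts or belief propagation. A sketch that merely evaluates the energy of a candidate disparity map over 4-connected neighbors (the $\lambda$ and $V_{max}$ values are illustrative presets):

```python
import numpy as np

def total_energy(data_cost, disparity, lam=1.0, v_max=3.0):
    """E(d) = sum of per-pixel matching energies + truncated-linear
    smoothness over 4-connected neighbors.

    data_cost[y, x]: E_d(p, d(p)) for the disparity chosen at pixel p
    disparity:       the candidate disparity map being scored
    """
    e = np.sum(data_cost)
    dy = np.abs(disparity[1:, :] - disparity[:-1, :])   # vertical neighbors
    dx = np.abs(disparity[:, 1:] - disparity[:, :-1])   # horizontal neighbors
    e += lam * np.sum(np.minimum(dy, v_max))            # V = lam*min(|di-dj|, Vmax)
    e += lam * np.sum(np.minimum(dx, v_max))
    return e
```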
With reference to the first aspect or any one of the first to sixth possible implementation manners of the first aspect, in a seventh possible implementation manner of the first aspect, the acquiring of the high dynamic range image according to the high dynamic range grayscale map, the second grayscale image, the virtual view grayscale image, the second image, and the virtual view includes: using in turn the formulas

$$I^{red}(e) = \eta(e)\, I_2^{red}(e) + [1 - \eta(e)] \times I_3^{red}(e),$$

$$I^{green}(e) = \eta(e)\, I_2^{green}(e) + [1 - \eta(e)] \times I_3^{green}(e),$$ and

$$I^{blue}(e) = \eta(e)\, I_2^{blue}(e) + [1 - \eta(e)] \times I_3^{blue}(e),$$

to obtain the red component value $I^{red}(e)$, the green component value $I^{green}(e)$, and the blue component value $I^{blue}(e)$ of each pixel in the high dynamic range image; wherein e represents a pixel e in the high dynamic range image; the weighting coefficient $\eta(e)$ is determined from the grayscale values $I^{grey}(e)$, $I_2^{grey}(e)$, and $I_3^{grey}(e)$; $I^{grey}(e)$ represents the pixel value of the pixel corresponding to the pixel e in the high dynamic range grayscale map; $I_2^{grey}(e)$ represents the pixel value of the pixel corresponding to the pixel e in the second grayscale image; $I_3^{grey}(e)$ represents the pixel value of the pixel corresponding to the pixel e in the virtual view grayscale image; $I_2^{red}(e)$, $I_2^{green}(e)$, and $I_2^{blue}(e)$ respectively represent the red, green, and blue component values of the pixel corresponding to the pixel e in the second image; and $I_3^{red}(e)$, $I_3^{green}(e)$, and $I_3^{blue}(e)$ respectively represent the red, green, and blue component values of the pixel corresponding to the pixel e in the virtual view; acquiring the pixel value of each pixel in the high dynamic range image according to the red, green, and blue component values of that pixel; and combining the pixel values of all pixels in the high dynamic range image to obtain the high dynamic range image.
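The three per-channel formulas are one blend with a shared per-pixel weight, so they vectorize directly. A sketch, assuming the weight map $\eta$ has already been computed from the three grayscale images:

```python
import numpy as np

def interpolate_color(eta, second_img, virtual_view):
    """I^c(e) = eta(e) * I_2^c(e) + (1 - eta(e)) * I_3^c(e) for c in R, G, B.

    eta:          H x W weight map
    second_img:   H x W x 3 color image I_2 (the second image)
    virtual_view: H x W x 3 color image I_3 (the synthesized virtual view)
    """
    eta3 = eta[..., None]  # broadcast the weight over the three channels
    return eta3 * second_img.astype(np.float64) \
        + (1.0 - eta3) * virtual_view.astype(np.float64)
```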
With reference to any one of the first to the seventh possible implementation manners of the first aspect, in an eighth possible implementation manner of the first aspect, the marking of noise pixels in the virtual view as hole pixels includes: determining at least two second pixels in the second image, the second pixels being pixels having the same pixel value; obtaining at least two marked pixels in the virtual view according to the at least two second pixels in the second image, the at least two marked pixels in the virtual view being the pixels in the virtual view that respectively correspond to the at least two second pixels in the second image; obtaining the average pixel value of the at least two marked pixels in the virtual view; sequentially determining whether the difference between the pixel value of each of the at least two marked pixels in the virtual view and the average pixel value is greater than a noise threshold value, the noise threshold value being a preset value for judging noise; and if the difference between the pixel value of a marked pixel and the average pixel value is greater than the noise threshold value, determining the marked pixel as a noise pixel and marking the noise pixel as a hole pixel.
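A sketch of this noise test, assuming a single-channel float virtual view (so the hole sentinel is representable) and an illustrative threshold:

```python
import numpy as np

HOLE = -1  # illustrative sentinel marking a hole pixel

def mark_noise_as_holes(virtual_view, marked_coords, noise_threshold=30.0):
    """Among virtual-view pixels whose counterparts in the second image share
    one pixel value, flag outliers from their mean as hole pixels.

    marked_coords: (y, x) positions of the marked pixels in the virtual view
    """
    values = np.array([virtual_view[y, x] for (y, x) in marked_coords],
                      dtype=np.float64)
    mean_val = values.mean()
    for (y, x), v in zip(marked_coords, values):
        if abs(v - mean_val) > noise_threshold:
            virtual_view[y, x] = HOLE  # noise pixel -> hole pixel
    return virtual_view
```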
With reference to any one of the first to eighth possible implementation manners of the first aspect, in a ninth possible implementation manner of the first aspect, the obtaining of the pixel value of any hole pixel r in the high dynamic range image according to the similarity coefficients of the hole pixel r and the first pixel includes: according to the formula

$$I(r) = \sum_{n=0}^{N} a_n I_2^n(r),$$

obtaining the pixel value of the hole pixel r; wherein $I(r)$ represents the pixel value of the hole pixel r; $I_2(r)$ represents the pixel value of the pixel in the second image corresponding to the hole pixel r; $a_n$ represents a similarity coefficient of the hole pixel r; $n = 0, 1, \ldots, N$; and N is a preset value.
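Evaluating the fill value is then a polynomial in the second image's pixel value, with the similarity coefficients fitted as in the next implementation manner; a two-line sketch:

```python
def fill_hole_value(a, I2_r):
    """I(r) = sum_n a_n * I_2(r)**n with coefficients a = [a_0, ..., a_N]."""
    return sum(a_n * I2_r ** n for n, a_n in enumerate(a))
```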
With reference to the ninth possible implementation manner of the first aspect, in a tenth possible implementation manner of the first aspect, the obtaining of the similarity coefficients between the adjacent pixels of any hole pixel r in the high dynamic range image and the adjacent pixels of the first pixel includes: according to the formula

$$[a_0, a_1, \ldots, a_N] = \arg\min \sum_{s \in \Psi_r} \exp\left[ -\gamma (r-s)^2 \right] \times \left[ I(s) - \sum_{n=0}^{N} a_n I_2^n(s) \right]^2,$$

obtaining the similarity coefficients between the adjacent pixels of any hole pixel r in the high dynamic range image and the adjacent pixels of the first pixel; wherein s represents one pixel in the neighborhood $\Psi_r$ of the pixel r in the high dynamic range image; $I(s)$ represents the pixel value of the pixel s; $I_2(s)$ represents the pixel value of the pixel in the second image corresponding to the pixel s; $r-s$ represents the distance between the pixel r and the pixel s; and $\gamma$ is a preset weight coefficient for the distance between the pixel r and the pixel s.
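This argmin is a distance-weighted polynomial regression of the HDR values on the second-image values over the non-hole neighbors of r, which reduces to an ordinary least-squares problem after scaling both sides by the square roots of the weights. A numpy sketch (the $\gamma$ and N values are illustrative presets):

```python
import numpy as np

def fit_similarity_coeffs(I_vals, I2_vals, dists, gamma=0.05, N=2):
    """Fit [a_0, ..., a_N] so that I(s) ~ sum_n a_n * I_2(s)**n over the
    neighborhood of the hole pixel r, weighted by exp(-gamma*(r-s)^2).

    I_vals:  HDR pixel values I(s) of the non-hole neighbors s
    I2_vals: corresponding second-image pixel values I_2(s)
    dists:   distances (r - s) to each neighbor
    """
    w = np.exp(-gamma * np.asarray(dists, dtype=np.float64) ** 2)
    V = np.vander(np.asarray(I2_vals, dtype=np.float64), N + 1,
                  increasing=True)   # columns: I_2^0, I_2^1, ..., I_2^N
    sw = np.sqrt(w)                  # sqrt-weights turn the weighted problem
    a, *_ = np.linalg.lstsq(sw[:, None] * V,
                            sw * np.asarray(I_vals, dtype=np.float64),
                            rcond=None)   # into an ordinary least squares
    return a                              # [a_0, a_1, ..., a_N]
```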
With reference to the ninth possible implementation manner of the first aspect, in an eleventh possible implementation manner of the first aspect, the obtaining of the similarity coefficients between the adjacent pixels of any hole pixel r in the high dynamic range image and the adjacent pixels of the first pixel includes: according to the formula

$$[a_0, a_1, \ldots, a_N] = \arg\min\; \rho_1 \sum_{s \in \Phi_r} \left[ I(s) - \sum_{n=0}^{N} a_n I_2^n(s) \right]^2 + \rho_2 \sum_{s \in A} \left[ I(s) - \sum_{n=0}^{N} a'_n I_2^n(s) \right]^2,$$

obtaining the similarity coefficients between the adjacent pixels of any hole pixel r in the high dynamic range image and the adjacent pixels of the first pixel; wherein the first scale factor $\rho_1$ and the second scale factor $\rho_2$ are preset values; s represents one pixel in the neighborhood $\Phi_r$ of the pixel r in the high dynamic range image; A represents the high dynamic range image; and $a'_n$ represents the similarity coefficient obtained when the pixel value of the hole pixel was first calculated.
With reference to the ninth possible implementation manner of the first aspect, in a twelfth possible implementation manner of the first aspect, the obtaining of the similarity coefficients between the adjacent pixels of any hole pixel r in the high dynamic range image and the adjacent pixels of the first pixel includes: determining whether the hole pixel r has a first hole pixel, the first hole pixel being a hole pixel, among the adjacent hole pixels of the hole pixel r, whose pixel value has already been obtained; and if the first hole pixel exists, taking the similarity coefficient of the first hole pixel as the similarity coefficient of the hole pixel r.
In a second aspect, an embodiment of the present invention provides a method for synthesizing a disparity map, including: acquiring a first image and a second image, the first image and the second image being obtained by shooting the same object at the same time; acquiring a set of candidate disparity values for each pixel of the first image, wherein the set of candidate disparity values comprises at least two candidate disparity values; obtaining the matching energy $E_d(p,d_i)$ of each candidate disparity value in the candidate disparity value set of each pixel of the first image according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image, wherein p denotes a pixel of the first image corresponding to the set of candidate disparity values, $d_i$, $i = 1, \ldots, k$, denotes the i-th candidate disparity value of the pixel p, and k is the total number of candidate disparity values in the candidate disparity value set of the pixel p; obtaining the disparity value of each pixel in the first image according to the matching energy $E_d(p,d_i)$ of each candidate disparity value in the candidate disparity value set of each pixel of the first image; and combining the disparity values of all pixels in the first image to obtain the disparity map.
In a first possible implementation manner of the second aspect, the obtaining, according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image, of the matching energy $E_d(p,d_i)$ of each candidate disparity value in the candidate disparity value set of each pixel of the first image includes: according to each candidate disparity value in the candidate disparity set of the pixel p, using the formula

$$E_d(p,d_i) = \frac{\sum_{q \in \Omega_p} w(p,q,d_i) \times \left[ I_1(q) - a \times I_2(q-d_i) - b \right]^2}{\sum_{q \in \Omega_p} w(p,q,d_i)}$$

to find the matching energy $E_d(p,d_i)$ of each candidate disparity value $d_i$ in the set of candidate disparity values of the pixel p; wherein the values of the first fitting parameter a and the second fitting parameter b are those for which the matching energy $E_d(p,d_i)$ takes its minimum value; $w(p,q,d_i) = w_c(p,q,d_i)\, w_s(p,q,d_i)\, w_d(p,q,d_i)$; the first pixel block $\Omega_p$ represents a block of pixels of the first image containing the pixel p; the pixel q is a pixel adjacent to the pixel p and belonging to the first pixel block $\Omega_p$; $I_1(q)$ represents the pixel value of the pixel q; $I_2(q-d_i)$ represents the pixel value of the pixel $q-d_i$ in the second image corresponding to the pixel q; $w_c(p,q,d_i)$ represents a pixel weight value; $w_s(p,q,d_i)$ represents a distance weight value; and $w_d(p,q,d_i)$ represents a disparity weight value.
With reference to the first possible implementation manner of the second aspect, in a second possible implementation manner of the second aspect, the pixel weight value $w_c(p,q,d_i)$ can be obtained according to the formula

$$w_c(p,q,d_i) = \exp\left[ -\beta_1 \times |I_1(p) - I_1(q)| \times |I_2(p-d_i) - I_2(q-d_i)| \right];$$

the distance weight value $w_s(p,q,d_i)$ can be obtained according to the formula

$$w_s(p,q,d_i) = \exp\left[ -\beta_2 \times (p-q)^2 \right];$$

and the disparity weight value $w_d(p,q,d_i)$ can be obtained according to the formula

$$w_d(p,q,d_i) = \exp\left[ -\beta_3 \times (p-q)^2 - \beta_4 \times |I_1(p) - I_1(q)| \times |I_2(p-d_i) - I_2(q-d_i)| \right];$$

wherein $I_1(p)$ represents the pixel value of the pixel p; $I_2(p-d_i)$ represents the pixel value of the pixel $p-d_i$ in the second image corresponding to the pixel p; and the first weight coefficient $\beta_1$, the second weight coefficient $\beta_2$, the third weight coefficient $\beta_3$, and the fourth weight coefficient $\beta_4$ are preset values.
With reference to the second aspect, in a third possible implementation manner of the second aspect, the obtaining, according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image, of the matching energy $E_d(p,d_i)$ of each candidate disparity value in the candidate disparity value set of each pixel of the first image includes: according to each candidate disparity value in the candidate disparity set of the pixel p, using the formula

$$E_d(p,d_i) = \frac{\sum_{q \in \Omega_p} w(p,q,d_i) \times \left[ I'_1(q) - a \times I'_2(q-d_i) - b \right]^2}{\sum_{q \in \Omega_p} w(p,q,d_i)}$$

to find the matching energy $E_d(p,d_i)$ of each candidate disparity value $d_i$ in the set of candidate disparity values of the pixel p;

wherein $w(p,q,d_i) = w_c(p,q,d_i)\, w_s(p,q,d_i)\, w_d(p,q,d_i)$;

the pixel weight value $w_c(p,q,d_i)$ can be obtained according to the formula

$$w_c(p,q,d_i) = \exp\left[ -\beta_1 \times |I'_1(p) - I'_1(q)| \times |I'_2(p-d_i) - I'_2(q-d_i)| \right];$$

the distance weight value $w_s(p,q,d_i)$ can be obtained according to the formula

$$w_s(p,q,d_i) = \exp\left[ -\beta_2 \times (p-q)^2 \right];$$

the disparity weight value $w_d(p,q,d_i)$ can be obtained according to the formula

$$w_d(p,q,d_i) = \exp\left[ -\beta_3 \times (p-q)^2 - \beta_4 \times |I'_1(p) - I'_1(q)| \times |I'_2(p-d_i) - I'_2(q-d_i)| \right];$$

$I'_1(p) = I_1(p)\cos\theta - I_2(p-d_i)\sin\theta$; $I'_2(p-d_i) = I_1(p)\sin\theta - I_2(p-d_i)\cos\theta$;

$I'_1(q) = I_1(q)\cos\theta - I_2(q-d_i)\sin\theta$; $I'_2(q-d_i) = I_1(q)\sin\theta - I_2(q-d_i)\cos\theta$;

and the adjustment angle $\theta$ is a value set in advance to be greater than 0° and less than 90°.
With reference to the second aspect or any one of the first to third possible implementation manners of the second aspect, in a fourth possible implementation manner of the second aspect, the obtaining of the disparity value of each pixel in the first image according to the matching energy $E_d(p,d_i)$ of each candidate disparity value in the candidate disparity value set of each pixel of the first image includes: according to the formula

$$E(d_i) = \sum_{p \in I} E_d(p,d_i) + \sum_{p \in I} \sum_{q \in N_p} V_{p,q}(d_i,d_j),$$

finding, for each pixel in the first image, the candidate disparity value in its set of candidate disparity values for which the candidate energy $E(d_i)$ takes its minimum value, and determining that candidate disparity value as the disparity value of the pixel; wherein I represents the first image; the second pixel block $N_p$ represents a block of pixels of the first image containing the pixel p; $V_{p,q}(d_i,d_j) = \lambda \times \min(|d_i - d_j|, V_{max})$; $d_j$ represents the j-th candidate disparity value of the pixel q, $j = 1, \ldots, m$; m is the total number of candidate disparity values in the candidate disparity value set of the pixel q; the smoothing coefficient $\lambda$ is a preset value; and the maximum inter-pixel disparity difference $V_{max}$ is a preset value.
In a third aspect, an embodiment of the present invention provides a high dynamic range image synthesis apparatus, including: an acquisition unit configured to acquire a first image and a second image, the first image and the second image being obtained by shooting the same object at the same time with different exposures; a disparity processing unit configured to perform binocular stereo matching on the first image and the second image acquired by the acquisition unit to obtain a disparity map; a virtual view synthesis unit configured to synthesize a virtual view having the same viewing angle as the second image according to the disparity map obtained by the disparity processing unit and the first image obtained by the acquisition unit; a grayscale extraction unit configured to obtain a second grayscale image according to the second image obtained by the acquisition unit and to obtain a virtual view grayscale image according to the virtual view synthesized by the virtual view synthesis unit; a high dynamic range fusion unit configured to obtain a high dynamic range grayscale image through a high dynamic range synthesis algorithm according to the second grayscale image and the virtual view grayscale image obtained by the grayscale extraction unit; and a color interpolation unit configured to obtain a high dynamic range image according to the high dynamic range grayscale image, the second grayscale image, the virtual view grayscale image, the second image, and the virtual view.
In a first possible implementation manner of the third aspect, the apparatus further includes a hole pixel processing unit. The hole pixel processing unit is configured to mark the noise pixels or the occlusion areas in the virtual view as hole pixels; the occlusion area is an area produced because the first image and the second image shoot the same object from different angles; a noise pixel is produced by a pixel whose disparity value is computed incorrectly in the disparity map. The grayscale extraction unit is specifically configured to obtain a virtual view grayscale image marked with the hole pixels according to the virtual view marked with the hole pixels. The high dynamic range fusion unit is specifically configured to obtain a high dynamic range grayscale image marked with the hole pixels through a high dynamic range synthesis algorithm according to the second grayscale image and the virtual view grayscale image marked with the hole pixels. The color interpolation unit is specifically configured to obtain a high dynamic range image marked with the hole pixels according to the high dynamic range grayscale image marked with the hole pixels, the second grayscale image, the virtual view grayscale image marked with the hole pixels, the second image, and the virtual view marked with the hole pixels. The hole pixel processing unit is further configured to determine, in the second image, a first pixel corresponding to each hole pixel in the high dynamic range image marked with the hole pixels; to obtain a similarity coefficient between the adjacent pixels of each hole pixel in the high dynamic range image and the adjacent pixels of the first pixel; and to obtain the pixel value of each hole pixel in the high dynamic range image according to the similarity coefficient and the first pixel.
With reference to the third aspect or the first possible implementation manner of the third aspect, in a second possible implementation manner of the third aspect, the disparity processing unit includes an acquisition module, a calculation module, a determination module, and a combination module. The acquisition module is configured to obtain a set of candidate disparity values for each pixel of the first image, wherein the set of candidate disparity values comprises at least two candidate disparity values. The calculation module is configured to obtain, according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image, the matching energy $E_d(p,d_i)$ of each candidate disparity value in the candidate disparity value set of each pixel of the first image; wherein p denotes a pixel of the first image corresponding to the set of candidate disparity values; $d_i$, $i = 1, \ldots, k$, represents the i-th candidate disparity value of the pixel p; and k is the total number of candidate disparity values in the candidate disparity value set of the pixel p. The determination module is configured to obtain the disparity value of each pixel in the first image according to the matching energy of each candidate disparity value in the candidate disparity value set of each pixel of the first image. The combination module is configured to combine the disparity values of all pixels in the first image to obtain the disparity map.
With reference to the second possible implementation manner of the third aspect, in a third possible implementation manner of the third aspect, the calculation module is specifically configured to, according to each candidate disparity value in the candidate disparity set of the pixel p, use the formula

$$E_d(p,d_i) = \frac{\sum_{q \in \Omega_p} w(p,q,d_i) \times \left[ I_1(q) - a \times I_2(q-d_i) - b \right]^2}{\sum_{q \in \Omega_p} w(p,q,d_i)}$$

to find the matching energy $E_d(p,d_i)$ of each candidate disparity value $d_i$ in the set of candidate disparity values of the pixel p; wherein the values of the first fitting parameter a and the second fitting parameter b are those for which the matching energy $E_d(p,d_i)$ takes its minimum value; $w(p,q,d_i) = w_c(p,q,d_i)\, w_s(p,q,d_i)\, w_d(p,q,d_i)$; the first pixel block $\Omega_p$ represents a block of pixels of the first image containing the pixel p; the pixel q is a pixel adjacent to the pixel p and belonging to the first pixel block $\Omega_p$; $I_1(q)$ represents the pixel value of the pixel q; $I_2(q-d_i)$ represents the pixel value of the pixel $q-d_i$ in the second image corresponding to the pixel q; $w_c(p,q,d_i)$ represents a pixel weight value; $w_s(p,q,d_i)$ represents a distance weight value; and $w_d(p,q,d_i)$ represents a disparity weight value.
With reference to the third possible implementation manner of the third aspect, in a fourth possible implementation manner of the third aspect, the pixel weight value $w_c(p,q,d_i)$ can be obtained according to the formula

$$w_c(p,q,d_i) = \exp\left[ -\beta_1 \times |I_1(p) - I_1(q)| \times |I_2(p-d_i) - I_2(q-d_i)| \right];$$

the distance weight value $w_s(p,q,d_i)$ can be obtained according to the formula

$$w_s(p,q,d_i) = \exp\left[ -\beta_2 \times (p-q)^2 \right];$$

and the disparity weight value $w_d(p,q,d_i)$ can be obtained according to the formula

$$w_d(p,q,d_i) = \exp\left[ -\beta_3 \times (p-q)^2 - \beta_4 \times |I_1(p) - I_1(q)| \times |I_2(p-d_i) - I_2(q-d_i)| \right];$$

wherein $I_1(p)$ represents the pixel value of the pixel p; $I_2(p-d_i)$ represents the pixel value of the pixel $p-d_i$ in the second image corresponding to the pixel p; and the first weight coefficient $\beta_1$, the second weight coefficient $\beta_2$, the third weight coefficient $\beta_3$, and the fourth weight coefficient $\beta_4$ are preset values.
With reference to the second possible implementation manner of the third aspect, in a fifth possible implementation manner of the third aspect, the calculation module is specifically configured to, according to each candidate disparity value in the candidate disparity set of the pixel p, use the formula

$$E_d(p,d_i) = \frac{\sum_{q \in \Omega_p} w(p,q,d_i) \times \left[ I'_1(q) - a \times I'_2(q-d_i) - b \right]^2}{\sum_{q \in \Omega_p} w(p,q,d_i)}$$

to find the matching energy $E_d(p,d_i)$ of each candidate disparity value $d_i$ in the set of candidate disparity values of the pixel p;

wherein $w(p,q,d_i) = w_c(p,q,d_i)\, w_s(p,q,d_i)\, w_d(p,q,d_i)$;

the pixel weight value $w_c(p,q,d_i)$ can be obtained according to the formula

$$w_c(p,q,d_i) = \exp\left[ -\beta_1 \times |I'_1(p) - I'_1(q)| \times |I'_2(p-d_i) - I'_2(q-d_i)| \right];$$

the distance weight value $w_s(p,q,d_i)$ can be obtained according to the formula

$$w_s(p,q,d_i) = \exp\left[ -\beta_2 \times (p-q)^2 \right];$$

the disparity weight value $w_d(p,q,d_i)$ can be obtained according to the formula

$$w_d(p,q,d_i) = \exp\left[ -\beta_3 \times (p-q)^2 - \beta_4 \times |I'_1(p) - I'_1(q)| \times |I'_2(p-d_i) - I'_2(q-d_i)| \right];$$

$I'_1(p) = I_1(p)\cos\theta - I_2(p-d_i)\sin\theta$; $I'_2(p-d_i) = I_1(p)\sin\theta - I_2(p-d_i)\cos\theta$;

$I'_1(q) = I_1(q)\cos\theta - I_2(q-d_i)\sin\theta$; $I'_2(q-d_i) = I_1(q)\sin\theta - I_2(q-d_i)\cos\theta$;

and the adjustment angle $\theta$ is a value set in advance to be greater than 0° and less than 90°.
With reference to any one of the second to fifth possible implementation manners of the third aspect, in a sixth possible implementation manner of the third aspect, the determination module is specifically configured to, according to the formula

$$E(d_i) = \sum_{p \in I} E_d(p,d_i) + \sum_{p \in I} \sum_{q \in N_p} V_{p,q}(d_i,d_j),$$

find, for each pixel in the first image, the candidate disparity value in its set of candidate disparity values for which the candidate energy $E(d_i)$ takes its minimum value, and determine that candidate disparity value as the disparity value of the pixel; wherein I represents the first image; the second pixel block $N_p$ represents a block of pixels of the first image containing the pixel p; $V_{p,q}(d_i,d_j) = \lambda \times \min(|d_i - d_j|, V_{max})$; $d_j$ represents the j-th candidate disparity value of the pixel q, $j = 1, \ldots, m$; m is the total number of candidate disparity values in the candidate disparity value set of the pixel q; the smoothing coefficient $\lambda$ is a preset value; and the maximum inter-pixel disparity difference $V_{max}$ is a preset value.
With reference to the third aspect or any one of the first to sixth possible implementation manners of the third aspect, in a seventh possible implementation manner of the third aspect, the color interpolation unit is specifically configured to use in turn the formulas

$$I^{red}(e) = \eta(e)\, I_2^{red}(e) + [1 - \eta(e)] \times I_3^{red}(e),$$

$$I^{green}(e) = \eta(e)\, I_2^{green}(e) + [1 - \eta(e)] \times I_3^{green}(e),$$ and

$$I^{blue}(e) = \eta(e)\, I_2^{blue}(e) + [1 - \eta(e)] \times I_3^{blue}(e),$$

to obtain the red component value $I^{red}(e)$, the green component value $I^{green}(e)$, and the blue component value $I^{blue}(e)$ of each pixel in the high dynamic range image; wherein e represents a pixel e in the high dynamic range image; the weighting coefficient $\eta(e)$ is determined from the grayscale values $I^{grey}(e)$, $I_2^{grey}(e)$, and $I_3^{grey}(e)$; $I^{grey}(e)$ represents the pixel value of the pixel corresponding to the pixel e in the high dynamic range grayscale map; $I_2^{grey}(e)$ represents the pixel value of the pixel corresponding to the pixel e in the second grayscale image; $I_3^{grey}(e)$ represents the pixel value of the pixel corresponding to the pixel e in the virtual view grayscale image; $I_2^{red}(e)$, $I_2^{green}(e)$, and $I_2^{blue}(e)$ respectively represent the red, green, and blue component values of the pixel corresponding to the pixel e in the second image; and $I_3^{red}(e)$, $I_3^{green}(e)$, and $I_3^{blue}(e)$ respectively represent the red, green, and blue component values of the pixel corresponding to the pixel e in the virtual view. The color interpolation unit is specifically configured to acquire the pixel value of each pixel in the high dynamic range image according to the red, green, and blue component values of that pixel, and to combine the pixel values of all pixels in the high dynamic range image to obtain the high dynamic range image.
With reference to any one of the first to seventh possible implementation manners of the third aspect, in an eighth possible implementation manner of the third aspect, the hole pixel processing unit is specifically configured to determine at least two second pixels in the second image, the second pixels being pixels having the same pixel value; to obtain at least two marked pixels in the virtual view according to the at least two second pixels in the second image, the at least two marked pixels in the virtual view being the pixels in the virtual view that respectively correspond to the at least two second pixels in the second image; to obtain the average pixel value of the at least two marked pixels in the virtual view; to sequentially determine whether the difference between the pixel value of each of the at least two marked pixels in the virtual view and the average pixel value is greater than a noise threshold value, the noise threshold value being a preset value for judging noise; and, when the difference between the pixel value of a marked pixel and the average pixel value is greater than the noise threshold value, to determine the marked pixel as a noise pixel and mark the noise pixel as a hole pixel.
With reference to any one of the first to eighth possible implementation manners of the third aspect, in a ninth possible implementation manner of the third aspect, the hole pixel processing unit is specifically configured to obtain the pixel value of the hole pixel r according to the formula $I(r) = \sum_{n=0}^{N} a_n I_2^n(r)$; wherein $I(r)$ represents the pixel value of the hole pixel r; $I_2(r)$ represents the pixel value of the pixel in the second image corresponding to the hole pixel r; $a_n$ represents a similarity coefficient of the hole pixel r, with n = 0, 1, ..., N; and N is a preset value.
With reference to the ninth possible implementation manner of the third aspect, in a tenth possible implementation manner of the third aspect, the hole pixel processing unit is specifically configured to, according to the formula
$$[a_0, a_1, \ldots, a_N] = \arg\min \sum_{s \in \Psi_r} \exp[-\gamma (r - s)^2] \times \Big[I(s) - \sum_{n=0}^{N} a_n I_2^n(s)\Big]^2,$$
Obtaining a similarity coefficient between an adjacent pixel of any hole pixel r in the high dynamic range image and an adjacent pixel of the first pixel; wherein s represents a neighborhood Ψ of the pixel r in the high dynamic range imagerOne pixel of (1); said I(s) represents a pixel value of said pixel s; said I2(s) represents a pixel value of a pixel in the second image corresponding to the pixel s; the r-s represents the distance between the pixel r and the pixel s; γ is a predetermined weight coefficient indicating a distance between the pixel r and the pixel s.
With reference to the ninth possible implementation manner of the third aspect, in an eleventh possible implementation manner of the third aspect, the hole pixel processing unit is specifically configured to perform processing according to a formula
$$[a_0, a_1, \ldots, a_N] = \arg\min\ \rho_1 \sum_{s \in \Phi_r} \Big[I(s) - \sum_{n=0}^{N} a_n I_2^n(s)\Big]^2 + \rho_2 \sum_{s \in A} \Big[I(s) - \sum_{n=0}^{N} a'_n I_2^n(s)\Big]^2,$$
obtain the similarity coefficients between the adjacent pixels of any hole pixel r in the high dynamic range image and the adjacent pixels of the first pixel; wherein the first proportionality coefficient $\rho_1$ and the second proportionality coefficient $\rho_2$ are preset values; s represents a pixel in the neighborhood $\Phi_r$ of the pixel r in the high dynamic range image; A represents the high dynamic range image; and $a'_n$ represents the similarity coefficient obtained when the pixel value of the hole pixel was first calculated.
With reference to the ninth possible implementation manner of the third aspect, in a twelfth possible implementation manner of the third aspect, the hole pixel processing unit is specifically configured to determine whether a first hole pixel exists for the hole pixel r, the first hole pixel being a hole pixel, among the hole pixels adjacent to the hole pixel r, whose pixel value has already been obtained; and the hole pixel processing unit is specifically configured to, when it is determined that the first hole pixel exists, use the similarity coefficient of the first hole pixel as the similarity coefficient of the hole pixel r.
In a fourth aspect, an embodiment of the present invention provides an apparatus, including: an acquisition unit configured to acquire a first image and a second image, the first image and the second image being obtained by shooting the same object at the same time; the acquisition unit is further configured to acquire a candidate disparity value set for each pixel of the first image, wherein the candidate disparity value set comprises at least two candidate disparity values; a calculating unit configured to obtain, according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image, the matching energy $E_d(p, d_i)$ of each candidate disparity value in the candidate disparity value set of each pixel of the first image, wherein p denotes a pixel p, the pixel of the first image corresponding to the candidate disparity value set; $d_i$ represents the i-th candidate disparity value of the pixel p, i = 1, ..., k; and k is the total number of candidate disparity values in the candidate disparity value set of the pixel p; a determining unit configured to obtain the disparity value of each pixel in the first image according to the matching energy $E_d(p, d_i)$ of each candidate disparity value in the candidate disparity value set of each pixel of the first image; and a processing unit configured to combine the disparity values of each pixel in the first image to obtain the disparity map.
In a first possible implementation of the fourth aspect, the computing unit is specifically configured to utilize a formula according to each candidate disparity value in the candidate disparity set of the pixel p
$$E_d(p, d_i) = \frac{\sum_{q \in \Omega_p} w(p, q, d_i) \times [I_1(q) - a \times I_2(q - d_i) - b]^2}{\sum_{q \in \Omega_p} w(p, q, d_i)}$$

to obtain the matching energy $E_d(p, d_i)$ of each candidate disparity value $d_i$ in the candidate disparity value set of the pixel p; wherein the value of the first fitting parameter a and the value of the second fitting parameter b are the values for which the matching energy $E_d(p, d_i)$ attains its minimum; $w(p, q, d_i) = w_c(p, q, d_i)\,w_s(p, q, d_i)\,w_d(p, q, d_i)$; the first pixel block $\Omega_p$ represents a block of pixels of the first image comprising the pixel p; the pixel q is a pixel adjacent to the pixel p and belonging to the first pixel block $\Omega_p$; $I_1(q)$ represents the pixel value of the pixel q; $I_2(q - d_i)$ represents the pixel value of the pixel $q - d_i$ in the second image corresponding to the pixel q; $w_c(p, q, d_i)$ represents a pixel weight value; $w_s(p, q, d_i)$ represents a distance weight value; and $w_d(p, q, d_i)$ represents a disparity weight value.
With reference to the first possible implementation manner of the fourth aspect, in a second possible implementation manner of the fourth aspect, the pixel weight value $w_c(p, q, d_i)$ may be obtained according to the formula

$$w_c(p, q, d_i) = \exp[-\beta_1 \times |I_1(p) - I_1(q)| \times |I_2(p - d_i) - I_2(q - d_i)|];$$

the distance weight value $w_s(p, q, d_i)$ may be obtained according to the formula

$$w_s(p, q, d_i) = \exp[-\beta_2 \times (p - q)^2];$$

the disparity weight value $w_d(p, q, d_i)$ may be obtained according to the formula

$$w_d(p, q, d_i) = \exp[-\beta_3 \times (p - q)^2 - \beta_4 \times |I_1(p) - I_1(q)| \times |I_2(p - d_i) - I_2(q - d_i)|];$$

wherein $I_1(p)$ represents the pixel value of the pixel p; $I_2(p - d_i)$ represents the pixel value of the pixel $p - d_i$ in the second image corresponding to the pixel p; and the first weight coefficient $\beta_1$, the second weight coefficient $\beta_2$, the third weight coefficient $\beta_3$, and the fourth weight coefficient $\beta_4$ are preset values.
With reference to the fourth aspect, in a third possible implementation manner of the fourth aspect, the calculating unit is specifically configured to utilize a formula according to each candidate disparity value in the candidate disparity set of the pixel p
$$E_d(p, d_i) = \frac{\sum_{q \in \Omega_p} w(p, q, d_i) \times [I'_1(q) - a \times I'_2(q - d_i) - b]^2}{\sum_{q \in \Omega_p} w(p, q, d_i)}$$

to obtain the matching energy $E_d(p, d_i)$ of each candidate disparity value $d_i$ in the candidate disparity value set of the pixel p;
wherein $w(p, q, d_i) = w_c(p, q, d_i)\,w_s(p, q, d_i)\,w_d(p, q, d_i)$;

the pixel weight value $w_c(p, q, d_i)$ may be obtained according to the formula

$$w_c(p, q, d_i) = \exp[-\beta_1 \times |I'_1(p) - I'_1(q)| \times |I'_2(p - d_i) - I'_2(q - d_i)|];$$

the distance weight value $w_s(p, q, d_i)$ may be obtained according to the formula

$$w_s(p, q, d_i) = \exp[-\beta_2 \times (p - q)^2];$$

the disparity weight value $w_d(p, q, d_i)$ may be obtained according to the formula

$$w_d(p, q, d_i) = \exp[-\beta_3 \times (p - q)^2 - \beta_4 \times |I'_1(p) - I'_1(q)| \times |I'_2(p - d_i) - I'_2(q - d_i)|];$$

$$I'_1(p) = I_1(p)\cos\theta - I_2(p - d_i)\sin\theta;$$
$$I'_2(p - d_i) = I_1(p)\sin\theta - I_2(p - d_i)\cos\theta;$$
$$I'_1(q) = I_1(q)\cos\theta - I_2(q - d_i)\sin\theta;$$
$$I'_2(q - d_i) = I_1(q)\sin\theta - I_2(q - d_i)\cos\theta;$$
The adjustment angle θ is a value set in advance to be greater than 0 ° and less than 90 °.
With reference to the fourth aspect or any one of the first to third possible implementation manners of the fourth aspect, in a fourth possible implementation manner of the fourth aspect, the determining unit is specifically configured to, according to the formula

$$E(d_i) = \sum_{p \in I} E_d(p, d_i) + \sum_{p \in I} \sum_{q \in N_p} V_{p,q}(d_i, d_j),$$
determine, when the candidate energy $E(d_i)$ over the candidate disparity values $d_i$ in the candidate disparity value sets of the pixels p attains its minimum value, the corresponding candidate disparity value of each pixel in the first image as the disparity value of that pixel; wherein I represents the first image; the second pixel block $N_p$ represents a block of pixels of the first image comprising the pixel p; $V_{p,q}(d_i, d_j) = \lambda \times \min(|d_i - d_j|, V_{max})$; $d_j$ represents the j-th candidate disparity value of the pixel q, j = 1, ..., m; m is the total number of candidate disparity values in the candidate disparity value set of the pixel q; the smoothing coefficient λ is a preset value; and the maximum value $V_{max}$ of the disparity difference between adjacent pixels is a preset value.
According to the method and the device for high dynamic range image synthesis provided by the embodiments of the present invention, a first image and a second image with different exposure degrees are acquired; binocular stereo matching is performed on the first image and the second image to obtain a disparity map; a virtual view with the same viewing angle as the second image is synthesized according to the disparity map and the first image; a second grayscale image is obtained according to the second image, and a virtual-view grayscale image is obtained according to the virtual view; a high dynamic range grayscale image is obtained through a high dynamic range synthesis algorithm according to the second grayscale image and the virtual-view grayscale image; and finally the high dynamic range image is obtained according to the high dynamic range grayscale image, the second grayscale image, the virtual-view grayscale image, the second image, and the virtual view. Because the relationship between adjacent pixels is taken into account when performing virtual view synthesis, the quality of the high dynamic range image is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a method for high dynamic range image synthesis according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a mapping curve according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a coordinate system rotation according to an embodiment of the present invention;
fig. 4 is a schematic diagram of error rates of different stereo matching algorithms according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating another method for high dynamic range image synthesis according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of determining a noisy pixel according to an embodiment of the present invention;
fig. 7 is a flowchart illustrating a method for synthesizing a disparity map according to an embodiment of the present invention;
FIG. 8 is a functional diagram of an embodiment of a high dynamic range image synthesis apparatus;
fig. 9 is a functional schematic diagram of a parallax processing unit of the high dynamic range image synthesizing apparatus shown in fig. 8;
fig. 10 is a functional diagram of another high dynamic range image synthesis apparatus according to an embodiment of the present invention;
fig. 11 is a functional diagram of an apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
An embodiment of the present invention provides a method for synthesizing an image with a high dynamic range, as shown in fig. 1, including:
101. a first image and a second image are acquired.
The first image and the second image are obtained by shooting the same object at the same time by adopting different exposure degrees.
Note that there is an overlapping area between the first image and the second image.
The first image and the second image are corrected images, and there is only a displacement in the horizontal direction or the vertical direction between the first image and the second image.
The exposure level of the first image may be higher than that of the second image, or the exposure level of the second image may be higher than that of the first image. The present invention does not limit the exposure levels of the first image and the second image.
102. And carrying out binocular stereo matching on the first image and the second image to obtain a disparity map.
It should be noted that the binocular stereo matching is a process of matching corresponding pixels in images of the same object observed from two viewing angles, thereby calculating parallax and obtaining three-dimensional information of the object.
Specifically, the method for obtaining the disparity map by performing binocular stereo matching on the first image and the second image may be any method for obtaining the disparity maps of the two images in the prior art, such as WSAD (Weighted Sum of Absolute Differences algorithm), ANCC (Adaptive Normalized Cross-Correlation algorithm), and the like, and may also be the method provided by the present invention.
The binocular stereo matching algorithm provided by the present invention specifically comprises the following steps:
and S1, acquiring a candidate disparity value set of each pixel of the first image.
Wherein the set of candidate disparity values comprises at least two candidate disparity values.
Note that the candidate disparity value corresponds to a depth in three-dimensional space. Since the depth has a certain range, the candidate disparity value also has a certain range. Each value in this range is a candidate disparity value, and together these values constitute the candidate disparity value set of a pixel.
It should be noted that the candidate disparity values in the candidate disparity value set of each pixel in the first image may be the same or different. The invention is not limited in this regard.
S2. According to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image, obtain the matching energy $E_d(p, d_i)$ of each candidate disparity value in the candidate disparity value set of each pixel of the first image.

Where p denotes a pixel p, the pixel of the first image corresponding to the candidate disparity value set; $d_i$ represents the i-th candidate disparity value of the pixel p, i = 1, ..., k; and k is the total number of candidate disparity values in the candidate disparity value set of the pixel p.
It should be noted that, since the set of candidate disparity values for each pixel includes at least two candidate disparity values, k ≧ 2.
Furthermore, the present invention provides two methods of calculating the matching energy $E_d(p, d_i)$, as follows:

The first method: according to each candidate disparity value in the candidate disparity value set of the pixel p, use the formula
$$E_d(p, d_i) = \frac{\sum_{q \in \Omega_p} w(p, q, d_i) \times [I_1(q) - a \times I_2(q - d_i) - b]^2}{\sum_{q \in \Omega_p} w(p, q, d_i)}$$

to obtain the matching energy $E_d(p, d_i)$ of each candidate disparity value $d_i$ in the candidate disparity value set of the pixel p.

Wherein the value of the first fitting parameter a and the value of the second fitting parameter b are the values for which the matching energy $E_d(p, d_i)$ attains its minimum. $w(p, q, d_i) = w_c(p, q, d_i)\,w_s(p, q, d_i)\,w_d(p, q, d_i)$. The first pixel block $\Omega_p$ represents a block of pixels in the first image comprising the pixel p. The pixel q is a pixel adjacent to the pixel p and belonging to the first pixel block $\Omega_p$. $I_1(q)$ represents the pixel value of the pixel q. $I_2(q - d_i)$ represents the pixel value of the pixel $q - d_i$ in the second image corresponding to the pixel q. $w_c(p, q, d_i)$ represents a pixel weight value; $w_s(p, q, d_i)$ represents a distance weight value; $w_d(p, q, d_i)$ represents a disparity weight value.
Further, the pixel weight value $w_c(p, q, d_i)$ may be obtained according to the formula

$$w_c(p, q, d_i) = \exp[-\beta_1 \times |I_1(p) - I_1(q)| \times |I_2(p - d_i) - I_2(q - d_i)|];$$

the distance weight value $w_s(p, q, d_i)$ may be obtained according to the formula

$$w_s(p, q, d_i) = \exp[-\beta_2 \times (p - q)^2];$$

the disparity weight value $w_d(p, q, d_i)$ may be obtained according to the formula

$$w_d(p, q, d_i) = \exp[-\beta_3 \times (p - q)^2 - \beta_4 \times |I_1(p) - I_1(q)| \times |I_2(p - d_i) - I_2(q - d_i)|];$$

wherein $I_1(p)$ represents the pixel value of the pixel p; $I_2(p - d_i)$ represents the pixel value of the pixel $p - d_i$ in the second image corresponding to the pixel p; and the first weight coefficient $\beta_1$, the second weight coefficient $\beta_2$, the third weight coefficient $\beta_3$, and the fourth weight coefficient $\beta_4$ are preset values.

Note that substituting the calculation formulas of the pixel weight value $w_c(p, q, d_i)$, the distance weight value $w_s(p, q, d_i)$, and the disparity weight value $w_d(p, q, d_i)$ yields $w(p, q, d_i) = \exp[-(\beta_2 + \beta_3) \times (p - q)^2 - (\beta_1 + \beta_4) \times |I_1(p) - I_1(q)| \times |I_2(p - d_i) - I_2(q - d_i)|]$. Empirically, the value of $\beta_2 + \beta_3$ may be set to 0.040 and the value of $\beta_1 + \beta_4$ to 0.033.
It should be noted that the first pixel block $\Omega_p$ represents a block of pixels in the first image comprising the pixel p. The first pixel block may be a 3-neighborhood or a 4-neighborhood of the pixel p, and it may or may not be centered on the pixel p; the present invention does not limit the specific size of the first pixel block or the specific position of the pixel p within it.
It should be noted that the larger the area covered by the first pixel block, that is, the more values the pixel q takes, the smaller the difference between the calculated result and the actual result.
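To make the first method concrete, the following is a minimal sketch in Python (the helper name `matching_energy`, the window size, and the grayscale-only treatment are assumptions for illustration; the combined weight uses the empirical sums β2 + β3 = 0.040 and β1 + β4 = 0.033 quoted above, and a and b are fitted by weighted least squares as required by the definition of the matching energy):

```python
import numpy as np

# Sketch of the first matching-energy method for grayscale images.
def matching_energy(I1, I2, p, d, half=2):
    """E_d(p, d): weighted residual of the local fit I1(q) ~ a*I2(q-d) + b."""
    py, px = p
    if px - d < 0:
        return np.inf                        # candidate shifts p out of the frame
    h, w = I1.shape
    ws, v1, v2 = [], [], []
    for qy in range(max(0, py - half), min(h, py + half + 1)):
        for qx in range(max(0, px - half), min(w, px + half + 1)):
            if qx - d < 0:
                continue                     # neighbour shifts out of the frame
            dist2 = (qy - py) ** 2 + (qx - px) ** 2
            cdiff = abs(float(I1[py, px]) - I1[qy, qx]) \
                  * abs(float(I2[py, px - d]) - I2[qy, qx - d])
            ws.append(np.exp(-0.040 * dist2 - 0.033 * cdiff))   # combined weight
            v1.append(float(I1[qy, qx]))
            v2.append(float(I2[qy, qx - d]))
    ws, v1, v2 = np.asarray(ws), np.asarray(v1), np.asarray(v2)
    sw = np.sqrt(ws)                         # weighted least squares for a and b
    A = np.stack([v2 * sw, sw], axis=1)
    a, b = np.linalg.lstsq(A, v1 * sw, rcond=None)[0]
    return float((ws * (v1 - a * v2 - b) ** 2).sum() / ws.sum())
```

The matching energies of all candidate disparity values of a pixel can then be compared directly, the candidate with the smallest energy being the best local match.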
It should be noted that, when two pictures have been taken, there is a correspondence between the pixel values of corresponding points in the two pictures; it is assumed herein that a smooth mapping function represents the relationship between the pixel value of a pixel in one image and that of its corresponding pixel in the other image. In the embodiment of the present invention, the linear equation $I_1(f) = a \times I_2(j) + b$ is selected as this function, where $I_2(j)$ represents the pixel value of any pixel j in the second image, $I_1(f)$ represents the pixel value of the pixel f in the first image corresponding to the pixel j, and a and b are fitting parameters that vary with pixel position. That is, the first fitting parameter a and the second fitting parameter b differ from pixel to pixel.
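The minimization over the fitting parameters is not spelled out, but because $E_d(p, d_i)$ is quadratic in a and b, setting the partial derivatives to zero gives the standard weighted least-squares closed form. As a sketch, with the shorthand $W = \sum_q w$, $S_1 = \sum_q w\,I_1(q)$, $S_2 = \sum_q w\,I_2(q - d_i)$, $S_{12} = \sum_q w\,I_1(q)\,I_2(q - d_i)$, and $S_{22} = \sum_q w\,I_2(q - d_i)^2$ (all sums over $q \in \Omega_p$ with $w = w(p, q, d_i)$):

$$a = \frac{W\,S_{12} - S_1 S_2}{W\,S_{22} - S_2^2}, \qquad b = \frac{S_1 - a\,S_2}{W}.$$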
It should be noted that, since the matching point in the second image corresponding to each pixel in the first image needs to be determined when calculating according to the above formula, for convenience of calculation, f − d may be used to represent the pixel in the second image corresponding to any pixel f in the first image; here d represents the disparity value of the pixel f in the first image relative to its corresponding pixel in the second image. Since the true disparity value between a pixel in the first image and its corresponding pixel in the second image is unknown at this point, the candidate disparity value is used to approximate the actual disparity value.
It should be noted that a plurality of candidate disparity values are set for each pixel in the first image, and these values constitute the candidate disparity value set of the pixel; the candidate disparity value in the set with the smallest difference from the actual disparity value is then selected as the calculated disparity value of the pixel. That is, the disparity value of a pixel calculated in the embodiment of the present invention is not the actual disparity value of the pixel, but the value in the pixel's candidate disparity value set that is close to the actual disparity value.
It should be noted that, in the embodiment of the present invention:

the pixel value weight $w_c(p, q, d_i) = \exp[-\beta_1 \times |I_1(p) - I_1(q)| \times |I_2(p - d_i) - I_2(q - d_i)|]$ indicates that the closer the colors of the pixel p and the pixel q in the first image, the greater the pixel value weight;

the distance weight $w_s(p, q, d_i) = \exp[-\beta_2 \times (p - q)^2]$ indicates that the closer the actual distance between the pixel p and the pixel q in the first image, the greater the distance weight;

the disparity weight $w_d(p, q, d_i) = \exp[-\beta_3 \times (p - q)^2 - \beta_4 \times |I_1(p) - I_1(q)| \times |I_2(p - d_i) - I_2(q - d_i)|]$ indicates that the closer the disparity values of the pixel p and the pixel q in the first image, the greater the disparity weight.
It should be noted that, as shown in fig. 2, for over-bright and over-dark areas, each pixel in the first image corresponds to one pixel in the second image at the same position. Taking the pixel value in the second image as the horizontal axis and the pixel value in the first image as the vertical axis, the pixel values of the pixels located at the same position are mapped into the figure, yielding the dot matrix in the lower half of fig. 2, and two tangent lines, n and m, are drawn to the mapping curve formed by the dot matrix. It can be seen from the figure that the mapping curve is more susceptible to noise where the slope of the tangent is larger. To reduce this effect, the coordinate system in fig. 2 may be rotated counterclockwise by 0-90°, resulting in fig. 3; the tangent n in the coordinate system of fig. 2 is a straight line with slope tan α, and after the coordinate system is rotated counterclockwise by θ, the slope of the tangent n in the new coordinate system decreases to tan(α − θ).
Illustratively, if the slope of the tangent n in the original coordinate axes is too large, e.g., α ≈ 90°, and the new axes are rotated by 45°, the slope of the tangent n on the new axes is greatly reduced, to tan(α − 45°) ≈ 1.
Further, the optimization on the first method results in a second method:
using a formula based on each candidate disparity value in the candidate disparity set for pixel p
$$E_d(p, d_i) = \frac{\sum_{q \in \Omega_p} w(p, q, d_i) \times [I'_1(q) - a \times I'_2(q - d_i) - b]^2}{\sum_{q \in \Omega_p} w(p, q, d_i)}$$

to obtain the matching energy $E_d(p, d_i)$ of each candidate disparity value $d_i$ in the candidate disparity value set of the pixel p.
Wherein $w(p, q, d_i) = w_c(p, q, d_i)\,w_s(p, q, d_i)\,w_d(p, q, d_i)$;

the pixel weight value $w_c(p, q, d_i)$ may be obtained according to the formula

$$w_c(p, q, d_i) = \exp[-\beta_1 \times |I'_1(p) - I'_1(q)| \times |I'_2(p - d_i) - I'_2(q - d_i)|];$$

the distance weight value $w_s(p, q, d_i)$ may be obtained according to the formula

$$w_s(p, q, d_i) = \exp[-\beta_2 \times (p - q)^2];$$

the disparity weight value $w_d(p, q, d_i)$ may be obtained according to the formula

$$w_d(p, q, d_i) = \exp[-\beta_3 \times (p - q)^2 - \beta_4 \times |I'_1(p) - I'_1(q)| \times |I'_2(p - d_i) - I'_2(q - d_i)|];$$

$$I'_1(p) = I_1(p)\cos\theta - I_2(p - d_i)\sin\theta;$$
$$I'_2(p - d_i) = I_1(p)\sin\theta - I_2(p - d_i)\cos\theta;$$
$$I'_1(q) = I_1(q)\cos\theta - I_2(q - d_i)\sin\theta;$$
$$I'_2(q - d_i) = I_1(q)\sin\theta - I_2(q - d_i)\cos\theta;$$
the adjustment angle θ is a value set in advance to be greater than 0 ° and less than 90 °.
Note that since the adjustment angle θ is a value greater than 0° and less than 90°, the values of cos θ and sin θ are both between 0 and 1.
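As a brief sketch (a hypothetical helper; θ = 45° is only an illustrative choice within the allowed range), the rotated intensities used by the second method can be computed directly from the formulas above:

```python
import numpy as np

def rotate_intensities(i1, i2, theta_deg=45.0):
    """Return (I'_1, I'_2) for the pixel values i1 = I1(p), i2 = I2(p - d_i)."""
    t = np.radians(theta_deg)
    i1_rot = i1 * np.cos(t) - i2 * np.sin(t)  # I'_1(p), per the formulas above
    i2_rot = i1 * np.sin(t) - i2 * np.cos(t)  # I'_2(p - d_i), per the formulas above
    return i1_rot, i2_rot
```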
It should be noted that, in the second method:

the pixel value weight $w_c(p, q, d_i) = \exp[-\beta_1 \times |I'_1(p) - I'_1(q)| \times |I'_2(p - d_i) - I'_2(q - d_i)|]$ indicates that the closer the colors of the pixel p and the pixel q in the first image, the greater the pixel value weight;

the distance weight $w_s(p, q, d_i) = \exp[-\beta_2 \times (p - q)^2]$ indicates that the closer the actual distance between the pixel p and the pixel q in the first image, the greater the distance weight;

the disparity weight $w_d(p, q, d_i) = \exp[-\beta_3 \times (p - q)^2 - \beta_4 \times |I'_1(p) - I'_1(q)| \times |I'_2(p - d_i) - I'_2(q - d_i)|]$ indicates that the closer the disparity values of the pixel p and the pixel q in the first image, the greater the disparity weight.
S3. Obtain the disparity value of each pixel in the first image according to the matching energy $E_d(p, d_i)$ of each candidate disparity value in the candidate disparity value set of each pixel of the first image.
Further, according to the formula

$$E(d_i) = \sum_{p \in I} E_d(p, d_i) + \sum_{p \in I} \sum_{q \in N_p} V_{p,q}(d_i, d_j),$$

determine, as the disparity value of each pixel, the candidate disparity value of that pixel in the combination of candidate disparity values for which the candidate energy $E(d_i)$ attains its minimum value.

Wherein I represents the first image; the second pixel block $N_p$ represents a block of pixels comprising the pixel p in the first image; $V_{p,q}(d_i, d_j) = \lambda \times \min(|d_i - d_j|, V_{max})$; $d_j$ represents the j-th candidate disparity value of the pixel q, j = 1, ..., m; m is the total number of candidate disparity values in the candidate disparity value set of the pixel q; the smoothing coefficient λ is a preset value; and the maximum value $V_{max}$ of the disparity difference between adjacent pixels is a preset value.
It should be noted that the candidate energy includes two terms: the first is the sum of the matching energies of each pixel in the first image, and the second is the sum of the smoothing energies $V_{p,q}(d_i, d_j)$ of each pixel in the first image.
Note that the second pixel block $N_p$ may be the same as or different from the first pixel block; the present invention is not limited in this respect.
It should be noted that min(x, y) denotes the function taking the smaller of x and y. Thus $\min(|d_i - d_j|, V_{max})$ takes the smaller of the difference between the candidate disparity value $d_i$ of the pixel p in the first image and the candidate disparity value $d_j$ of the pixel q, and the preset maximum disparity difference between adjacent pixels. $V_{max}$ is a predefined cutoff value that prevents the smoothing energy from becoming too large, which would otherwise impair the accurate assignment of disparity at the edges between foreground and background.
It should be noted that a smaller candidate energy indicates greater similarity between the first image and the second image, that is, a better match between their pixels.
It should be noted that, in this step, the disparity values of the pixels of the first image are determined as the combination of candidate disparity values for which the obtained candidate energy is the minimum; the corresponding candidate disparity value of each pixel in the first image is then that pixel's disparity value. In effect, the candidate disparity value closest to the actual disparity value is selected from the pixel's candidate disparity value set. That is, assuming there are N pixels in the first image and M candidate disparity values per pixel, there are $M^N$ candidate energy values in total; the minimum value is selected from these $M^N$ candidate energies, and the corresponding candidate disparity value of each pixel is the finally obtained disparity value of that pixel.
Further, in order to simplify the calculation, the candidate disparity value of each pixel in the first image may be obtained quickly by using the existing graph cuts (image segmentation) method, in which case it is not necessary to traverse all combinations of candidate disparity values.
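The following is a minimal sketch of evaluating the candidate energy for one complete disparity labeling (the names are assumptions: `Ed` is a precomputed H × W × K array of matching energies, `labels` holds each pixel's chosen candidate index, `disp` maps candidate indices to disparity values, and LAMBDA and V_MAX stand in for the preset smoothing coefficient and cutoff). Graph cuts searches for the labeling that minimizes this quantity without enumerating all M^N possibilities:

```python
import numpy as np

LAMBDA, V_MAX = 1.0, 8      # preset smoothing coefficient and disparity cutoff

def candidate_energy(Ed, labels, disp):
    h, w, _ = Ed.shape
    # data term: sum of the chosen matching energies over all pixels
    data = Ed[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
    d = disp[labels]        # per-pixel disparity values
    # smoothness term: truncated-linear penalty over right and down neighbours
    smooth  = np.minimum(np.abs(d[:, 1:] - d[:, :-1]), V_MAX).sum()
    smooth += np.minimum(np.abs(d[1:, :] - d[:-1, :]), V_MAX).sum()
    return data + LAMBDA * smooth
```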
And S4, combining the parallax values of each pixel in the first image to obtain a parallax map.
The disparity map is an image obtained by arranging the disparity values of each pixel in the first image according to the arrangement order of the original pixels.
It should be noted that the error rates of the first method and the second method proposed in the embodiment of the present invention were compared with those of the existing WSAD and ANCC algorithms in three cases: an exposure ratio of 16:1 between the first image and the second image, an exposure ratio of 4:1, and equal exposures of the two images. Fig. 4 shows the comparison results. It can be seen from the figure that when the exposure ratio of the first image to the second image is particularly large, namely 16:1, the WSAD and ANCC algorithms have a large error rate, while the results of the first method and the second method proposed in this embodiment are very accurate. In the case of the other two exposure ratios, the first and second methods proposed in this embodiment are also consistently superior to WSAD and ANCC.
It should be noted that, although the error rate of the first method and the second method proposed in the embodiments of the present invention in calculating the disparity map is greatly reduced, a small portion of pixels still have a larger difference between the calculated disparity value and the actual disparity value, so that these pixels are regarded as pixels with an error in calculating the disparity value in the disparity map.
103. And synthesizing a virtual view having the same visual angle as the second image according to the disparity map and the first image.
When the first image, the second image, and the disparity map between the first image and the second image are known, a virtual view at an arbitrary angle may be synthesized. In the embodiment of the invention, for the simplicity and convenience of the subsequent image processing method, the virtual view with the same visual angle as the second image is synthesized by utilizing the disparity map and the first image.
It should be noted that, by using the prior art, a virtual view having the same viewing angle as the second image can be synthesized by the first image and the disparity map.
Specifically, in the embodiment of the present invention, there is only a displacement in the horizontal or vertical direction between the first image and the second image. When there is only a horizontal displacement, the pixel value of each pixel in the virtual view can be obtained by shifting each pixel of the first image along the horizontal direction by its disparity value: $I_1(x, y)$ represents the pixel value of the pixel with abscissa x and ordinate y in the first image, d represents the disparity value of that pixel, and the pixel value of the corresponding pixel in the virtual view is the pixel value of this first-image pixel after it has been shifted along the horizontal direction by the disparity value d. When there is only a vertical displacement, the pixel value of each pixel in the virtual view is obtained in the same way, with the shift of the disparity value d applied along the vertical direction.
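A minimal sketch of this forward warping for the horizontal-displacement case follows (the helper name and the sign convention of the shift are assumptions; unfilled positions are returned as a hole mask, matching the marking in steps 503a/503b below):

```python
import numpy as np

def synthesize_virtual_view(I1, disparity, hole_value=0):
    """Warp the first image into the second image's viewpoint."""
    h, w = disparity.shape
    view = np.full_like(I1, hole_value)          # start with all positions empty
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xv = x - int(round(disparity[y, x])) # shift along the horizontal
            if 0 <= xv < w:
                view[y, xv] = I1[y, x]
                filled[y, xv] = True
    return view, ~filled                         # second output marks the holes
```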
104. And obtaining a second gray level image according to the second image, and obtaining a virtual view gray level image according to the virtual view.
It should be noted that, the second grayscale image and the virtual-view grayscale image may be obtained by using a method of obtaining a grayscale image of an image according to a color image of the image in the prior art.
It should be noted that the grayscale image of a color image may be obtained according to the formula Grey = R × 0.299 + G × 0.587 + B × 0.114, or according to the formula Grey = (R + G + B) / 3; other prior-art methods for obtaining a grayscale image from a color image may also be used, and the present invention is not limited herein.
Wherein R represents a red component of any pixel in the color image, G represents a green component of the pixel, B represents a blue component of the pixel, and Grey represents a gray level of a pixel at a position corresponding to the pixel in the gray level map.
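As a small sketch, the first (luminance-weighted) formula applied to an H × W × 3 array:

```python
import numpy as np

def to_grayscale(rgb):
    """Grey = R * 0.299 + G * 0.587 + B * 0.114, as a channel-weighted sum."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb.astype(np.float64) @ weights
```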
105. And obtaining a high dynamic range gray scale image through a high dynamic range synthesis algorithm according to the second gray scale image and the virtual view gray scale image.
It should be noted that the high dynamic range synthesis algorithm refers to an algorithm for obtaining a high dynamic range image by fusing a plurality of pictures.
It should be noted that, in this step, the second grayscale image and the virtual view grayscale image may be fused by using an existing single-camera high dynamic range image synthesis method or a multi-camera high dynamic range image synthesis method to obtain a high dynamic range grayscale image.
It should be noted that, in the prior art, the red, green, and blue channels of the images to be synthesized are processed separately; when calculating the high dynamic range grayscale map with the prior art, the embodiment of the present invention only needs to process the grayscale of the images to be synthesized.
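The patent defers to any existing HDR synthesis algorithm for this step. As one illustrative stand-in (an assumption, not the patent's own fusion rule), a simple exposure-fusion-style blend that weights each grayscale input by how far its values sit from the extremes:

```python
import numpy as np

def fuse_grayscale(g1, g2):
    """Blend two grayscale exposures with a hat weight peaking at mid-gray."""
    def well_exposed(g):
        return 1.0 - 2.0 * np.abs(g / 255.0 - 0.5) + 1e-6   # avoid zero weights
    w1, w2 = well_exposed(g1), well_exposed(g2)
    return (w1 * g1 + w2 * g2) / (w1 + w2)
```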
106. And obtaining a high dynamic range image according to the high dynamic range gray image, the second gray image, the virtual view gray image, the second image and the virtual view.
It should be noted that, since the high dynamic range gray scale map obtained in the previous step does not include information related to red, green and blue colors, the present step determines the red component value, the green component value and the blue component value of each pixel in the high dynamic range gray scale map by using the colored second image and the virtual view.
Specifically, the method comprises the following steps:
t1, using formula in turn
<math> <mrow> <msup> <mi>I</mi> <mi>red</mi> </msup> <mrow> <mo>(</mo> <mi>e</mi> <mo>)</mo> </mrow> <mo>=</mo> <mi>&eta;</mi> <mrow> <mo>(</mo> <mi>e</mi> <mo>)</mo> </mrow> <msubsup> <mi>I</mi> <mn>2</mn> <mi>red</mi> </msubsup> <mrow> <mo>(</mo> <mi>e</mi> <mo>)</mo> </mrow> <mo>+</mo> <mo>[</mo> <mn>1</mn> <mo>-</mo> <mi>&eta;</mi> <mrow> <mo>(</mo> <mi>e</mi> <mo>)</mo> </mrow> <mo>]</mo> <mo>&times;</mo> <msubsup> <mi>I</mi> <mn>3</mn> <mi>red</mi> </msubsup> <mrow> <mo>(</mo> <mi>e</mi> <mo>)</mo> </mrow> <mo>,</mo> </mrow> </math>
<math> <mrow> <msup> <mi>I</mi> <mi>green</mi> </msup> <mrow> <mo>(</mo> <mi>e</mi> <mo>)</mo> </mrow> <mo>=</mo> <mi>&eta;</mi> <mrow> <mo>(</mo> <mi>e</mi> <mo>)</mo> </mrow> <msubsup> <mi>I</mi> <mn>2</mn> <mi>green</mi> </msubsup> <mrow> <mo>(</mo> <mi>e</mi> <mo>)</mo> </mrow> <mo>+</mo> <mo>[</mo> <mn>1</mn> <mo>-</mo> <mi>&eta;</mi> <mrow> <mo>(</mo> <mi>e</mi> <mo>)</mo> </mrow> <mo>]</mo> <mo>&times;</mo> <msubsup> <mi>I</mi> <mn>3</mn> <mi>green</mi> </msubsup> <mrow> <mo>(</mo> <mi>e</mi> <mo>)</mo> </mrow> </mrow> </math> And
<math> <mrow> <msup> <mi>I</mi> <mi>blue</mi> </msup> <mrow> <mo>(</mo> <mi>e</mi> <mo>)</mo> </mrow> <mo>=</mo> <mi>&eta;</mi> <mrow> <mo>(</mo> <mi>e</mi> <mo>)</mo> </mrow> <msubsup> <mi>I</mi> <mn>2</mn> <mi>blue</mi> </msubsup> <mrow> <mo>(</mo> <mi>e</mi> <mo>)</mo> </mrow> <mo>+</mo> <mo>[</mo> <mn>1</mn> <mo>-</mo> <mi>&eta;</mi> <mrow> <mo>(</mo> <mi>e</mi> <mo>)</mo> </mrow> <mo>]</mo> <mo>&times;</mo> <msubsup> <mi>I</mi> <mn>3</mn> <mi>blue</mi> </msubsup> <mrow> <mo>(</mo> <mi>e</mi> <mo>)</mo> </mrow> <mo>,</mo> </mrow> </math>
determining a red component value I for each pixel in a high dynamic range imagered(e) Green component value Igreen(e) And blue component value Iblue(e)。
Where e represents a pixel e in the high dynamic range image; $I^{grey}(e)$ represents the pixel value of the pixel corresponding to the pixel e in the high dynamic range grayscale map; $I_2^{grey}(e)$ represents the pixel value of the pixel corresponding to the pixel e in the second grayscale image; $I_3^{grey}(e)$ represents the pixel value of the pixel corresponding to the pixel e in the virtual-view grayscale image; $I_2^{red}(e)$, $I_2^{green}(e)$, and $I_2^{blue}(e)$ respectively represent the red, green, and blue component values of the pixel corresponding to the pixel e in the second image; $I_3^{red}(e)$, $I_3^{green}(e)$, and $I_3^{blue}(e)$ respectively represent the red, green, and blue component values of the pixel corresponding to the pixel e in the virtual view.
It should be noted that η (e) represents a weight coefficient for adjusting the ratio of the color of the second image to the color of the virtual view used in synthesizing the high dynamic range image. η (e) is a value calculated from the relationship between the second grayscale image, the virtual-view grayscale image, and the corresponding pixel on the high-dynamic-range grayscale image.
Note that η(e) must be calculated separately for each pixel in the high dynamic range image, after which the red component value $I^{red}(e)$, the green component value $I^{green}(e)$, and the blue component value $I^{blue}(e)$ of each pixel are calculated.
T2, obtaining a pixel value of each pixel in the high dynamic range image according to the red component value, the green component value and the blue component value of each pixel in the high dynamic range image;
it should be noted that, the method for obtaining the pixel value of each pixel in the high dynamic range image according to the red component value, the green component value and the blue component value of each pixel is the same as the method for obtaining the pixel value of a pixel according to the red component value, the green component value and the blue component value of a pixel known in the prior art, and the description of the method is omitted here.
T3, combining the pixel values of each pixel in the high dynamic range image to obtain the high dynamic range image.
Note that the high dynamic range image is formed by combining a plurality of pixels in an arrangement, and each pixel may be expressed by a pixel value.
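A minimal sketch of steps T1-T3 follows. The per-pixel weight η is derived here by inverting the linear grayscale blend, i.e., as the proportion of the fused gray value attributable to the second grayscale image; this derivation is an illustrative assumption, since the patent only states that η(e) is calculated from the relationship between the three grayscale images:

```python
import numpy as np

def restore_color(hdr_gray, gray2, gray3, img2, img3, eps=1e-6):
    """Recombine the fused grayscale with the colours of img2 and img3."""
    # eta -> 1 where hdr_gray matches gray2, eta -> 0 where it matches gray3
    eta = np.clip((hdr_gray - gray3) / (gray2 - gray3 + eps), 0.0, 1.0)
    eta = eta[..., None]                     # broadcast over R, G, B channels
    return eta * img2 + (1.0 - eta) * img3   # the per-channel blend of step T1
```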
The method for high dynamic range image synthesis provided by the embodiment of the present invention first acquires a first image and a second image with different exposure degrees; performs binocular stereo matching on the first image and the second image to obtain a disparity map; synthesizes a virtual view with the same viewing angle as the second image according to the disparity map and the first image; obtains a second grayscale image according to the second image and a virtual-view grayscale image according to the virtual view; obtains a high dynamic range grayscale image through a high dynamic range synthesis algorithm according to the second grayscale image and the virtual-view grayscale image; and finally obtains the high dynamic range image according to the high dynamic range grayscale image, the second grayscale image, the virtual-view grayscale image, the second image, and the virtual view. Because the relationship between adjacent pixels is taken into account when performing virtual view synthesis, the quality of the high dynamic range image is improved.
The method for synthesizing the high dynamic range image provided by the embodiment of the invention, as shown in fig. 5, includes:
501. a first image and a second image are acquired.
The first image and the second image are obtained by shooting the same object at the same time by adopting different exposure degrees.
Specifically, refer to step 101, which is not described herein again.
502. And acquiring a disparity map through a binocular stereo matching algorithm according to the first image and the second image.
Specifically, refer to step 102, which is not described herein again.
It should be noted that the pixels of the occlusion area may be marked as hole pixels either while synthesizing the virtual view or after synthesizing the virtual view, and different steps are executed depending on when this marking is done. If the pixels of the occlusion region are marked as hole pixels while synthesizing the virtual view, steps 503a-504a and steps 505-509 are performed; if the pixels of the occlusion region are marked as hole pixels when determining the noise pixels after synthesizing the virtual view, steps 503b-504b and steps 505-509 are performed.
503a, synthesizing a virtual view having the same viewing angle as the second image according to the disparity map and the first image, and marking pixels of the occlusion region in the virtual view as hole pixels.
The occlusion area is an area generated by the first image and the second image at different angles of shooting the same object.
It should be noted that, according to the disparity map and the first image, synthesizing a virtual view having the same viewing angle as the second image is the same as in step 103, and details thereof are not repeated.
It should be noted that, since the first image and the second image have different viewing angles, when the first image is mapped to a virtual view having the same viewing angle as the second image, pixels in the first image cannot be mapped to pixels in the second image one by one, and these regions without corresponding pixels are mapped to the virtual image to form an occlusion region.
It should be noted that the occlusion area may be marked as hole pixels by setting the pixel values of all pixels at positions corresponding to the occlusion area in the virtual view to a fixed number, such as 1 or 0; or by using an image of the same size as the virtual view in which the pixel values at positions corresponding to the occlusion area are set to 0 and the pixel values at positions corresponding to non-occluded areas are set to 1. Other pixel-marking methods in the prior art are also possible, and the present invention is not limited in this respect.
503b, synthesizing a virtual view having the same angle of view as the second image according to the disparity map and the first image.
Specifically, refer to step 103, which is not described herein again.
504a, marking noise pixels in the virtual view as hole pixels.
Wherein the noise pixel is generated by a pixel in the disparity map with a wrong disparity value calculation.
It should be noted that, when calculating the disparity value of a pixel, one candidate disparity value is selected from the candidate disparity value set and used as the disparity value of the pixel, so the calculated disparity value may carry a certain error; when the error of a pixel exceeds a certain limit, that pixel is regarded as a pixel in the disparity map whose disparity value was calculated incorrectly. When synthesizing a virtual view with the same viewing angle as the second image from the disparity map and the first image, such incorrectly calculated disparity values produce noise in the synthesized virtual view, and the pixels corresponding to this noise are defined as noise pixels.
It should be noted that there is an approximate correspondence rule between the pixel values of pixels in the second image and the pixel values of the corresponding pixels in the virtual view. For example, when the pixel value of a certain pixel in the second image is small among the pixel values of all pixels of the second image, the pixel value of the corresponding pixel in the virtual image is also small among the pixel values of all pixels of the virtual image; when the pixel value of a certain pixel in the second image is large among the pixel values of all pixels of the second image, the pixel value of the corresponding pixel in the virtual image is also large among the pixel values of all pixels of the virtual image. The embodiment of the present invention uses this rule and marks the pixels that do not conform to it as noise pixels.
For example, as shown in fig. 6, the virtual view includes noise pixels. For the same position, one pixel in the virtual view corresponds to one pixel in the second image; taking the pixel value in the second image as the horizontal axis and the pixel value in the virtual view as the vertical axis, the pixel values of the pixels located at the same position in both images are mapped into the figure, yielding the dot matrix in the right coordinate axes of fig. 6. It can be observed that most points form a smooth increasing curve, while a few points lie farther from the mapping curve; these are noise. In our algorithm, we first estimate the mapping curve using all points, then calculate the distance from each point to the mapping curve; if the distance is large, the corresponding pixel of the point in the virtual view is determined to be a noise pixel.
Specifically, the method for selecting noise pixels and marking noise pixels may refer to the following steps:
q1, in the second image, at least two second pixels are determined.
The second pixels are pixels with the same pixel value.
Specifically, all pixels of the second image are grouped according to their pixel values, and all pixels falling into the same group are called second pixels.
It should be noted that when the pixel value of a certain pixel is unique among all pixels in the second image, that is, there is no pixel having the same pixel value as the pixel in the second image, no processing is performed on the pixel.
Q2, deriving at least two marked pixels in the virtual view from the at least two second pixels in the second image.
Wherein the at least two marked pixels in the virtual view are pixels in the virtual view that correspond to the at least two second pixels in the second image, respectively.
Specifically, the pixels corresponding to the pixels having the same pixel value in the second image are sequentially found in the virtual view.
Q3, obtaining an average pixel value of at least two marker pixels in the virtual view.
Specifically, the pixel value of each of the at least two marked pixels is obtained, and then the pixel values of each of the at least two marked pixels are summed and divided by the number of the marked pixels to obtain the average pixel value of the at least two marked pixels.
Q4, determining in turn whether the difference between the pixel value of each of the at least two marked pixels in the virtual view and the average pixel value is greater than a noise threshold value.
If the difference between the pixel value of one marked pixel and the corresponding average pixel value is greater than the preset noise threshold value, determining the pixel as a noise pixel; and if the difference value between the pixel value of one marked pixel and the corresponding average pixel value is not greater than the preset noise threshold value, determining that the pixel is not a noise pixel.
And Q5, if the difference value between the pixel value of the marking pixel and the average pixel value is larger than the noise threshold value, determining the marking pixel as a noise pixel, and marking the noise pixel as a hole pixel.
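A minimal sketch of steps Q1-Q5 for grayscale images (the function name and the threshold value are assumptions; the noise threshold is a preset value):

```python
import numpy as np

NOISE_THRESHOLD = 25.0      # preset value for judging noise

def mark_noise_pixels(img2, view, hole_mask):
    """Mark virtual-view pixels that break the second-image correspondence rule."""
    for value in np.unique(img2):
        ys, xs = np.where(img2 == value)          # Q1: the "second pixels"
        if ys.size < 2:
            continue                              # unique pixel value: skip it
        marked = view[ys, xs].astype(np.float64)  # Q2: marked pixels in the view
        mean = marked.mean()                      # Q3: their average pixel value
        noisy = np.abs(marked - mean) > NOISE_THRESHOLD   # Q4: compare to threshold
        hole_mask[ys[noisy], xs[noisy]] = True            # Q5: mark as hole pixels
    return hole_mask
```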
It should be noted that the method for marking the occlusion area may be the same as or different from the method for marking the noise pixel, and the present invention is not limited thereto.
And 504b, marking noise pixels and occlusion areas in the virtual view as hole pixels.
Specifically, the method for marking the occlusion region may refer to the method for marking the occlusion region in step 503a, and is not described herein again.
Specifically, the method for determining and marking the noise pixel may refer to the method for determining and marking the noise pixel in step 504a, and is not described herein again.
505. And obtaining a second gray image according to the second image, and obtaining a virtual view gray image marked with hole pixels according to the virtual view marked with hole pixels.
Specifically, for the non-hole pixel processing method, reference may be made to step 104 to obtain a second grayscale image according to the second image, and obtain a virtual view grayscale image according to the virtual view, which is not described herein again.
It should be noted that, for a hole pixel in the virtual view marked with a hole pixel, the pixel corresponding to the hole pixel is directly marked as the hole pixel in the virtual view grayscale image.
506. Obtain a high dynamic range grayscale image marked with hole pixels through a high dynamic range synthesis algorithm according to the second grayscale image and the virtual view grayscale image marked with hole pixels.
Specifically, for the processing of non-hole pixels, reference may be made to step 105, in which the high dynamic range grayscale image is obtained through a high dynamic range synthesis algorithm according to the second grayscale image and the virtual view grayscale image, and details are not described herein again.
It should be noted that, for the hole pixels in the virtual view grayscale image marked with hole pixels, the pixels at the corresponding positions are directly marked as hole pixels in the high dynamic range grayscale image.
507. Obtain the high dynamic range image marked with hole pixels according to the high dynamic range grayscale image marked with hole pixels, the second grayscale image, the virtual view grayscale image marked with hole pixels, the second image and the virtual view marked with hole pixels.
Specifically, for the processing of non-hole pixels, reference may be made to step 106, in which the high dynamic range image is obtained according to the high dynamic range grayscale image, the second grayscale image, the virtual view grayscale image, the second image and the virtual view, and details are not described herein again.
It should be noted that, since the hole pixels in the high dynamic range grayscale image marked with hole pixels are derived from the hole pixels in the virtual view grayscale image marked with hole pixels, which in turn are derived from the virtual view marked with hole pixels, the hole pixels occupy the same positions in all three images.
Accordingly, any of the three images may be selected as the reference, and the pixels at the positions corresponding to its hole pixels are directly marked as hole pixels in the high dynamic range image.
508. In the second image, determine the first pixel corresponding to each hole pixel of the high dynamic range image marked with hole pixels.
Specifically, for each hole pixel in the high dynamic range image marked with hole pixels, the pixel at the corresponding position in the second image is determined as its first pixel.
It should be noted that each hole pixel in the high dynamic range image marked with hole pixels has a corresponding first pixel in the second image.
509. Obtain a similarity coefficient between the neighboring pixels of each hole pixel in the high dynamic range image and the neighboring pixels of its first pixel, and obtain the pixel value of each of the at least one hole pixel in the high dynamic range image according to the similarity coefficient and the first pixel.
It should be noted that, in this embodiment, the similarity relationship between the neighboring pixels of a hole pixel in the high dynamic range image and the neighboring pixels of the corresponding first pixel in the second image is taken as the similarity relationship between the hole pixel and its first pixel; this relationship and the pixel value of the first pixel are then used to obtain the pixel value of the hole pixel.
It should be noted that the similarity relationship may be specifically expressed by a similarity coefficient.
Further, the similarity coefficient between the neighboring pixels of any hole pixel r in the high dynamic range image and the neighboring pixels of the first pixel may be obtained by any of the following three methods.
The first method: according to the formula

$$[a_0, a_1, \ldots, a_N] = \arg\min \sum_{s \in \Psi_r} \exp\left[-\gamma (r-s)^2\right] \times \left[I(s) - \sum_{n=0}^{N} a_n I_2^n(s)\right]^2,$$

the similarity coefficient between the neighboring pixels of any hole pixel $r$ in the high dynamic range image and the neighboring pixels of the first pixel is obtained.
where $s$ denotes a pixel in the neighborhood $\Psi_r$ of the pixel $r$ in the high dynamic range image; $I(s)$ denotes the pixel value of pixel $s$; $I_2(s)$ denotes the pixel value of the pixel corresponding to $s$ in the second image; $r - s$ denotes the distance between pixel $r$ and pixel $s$; and $\gamma$ is a preset weight coefficient applied to the distance between pixel $r$ and pixel $s$.
It should be noted that the neighborhood $\Psi_r$ may or may not be a region centered on the pixel $r$; the specific relationship between $\Psi_r$ and $r$ is not limited by the present invention.
The notation $x = \arg\min F(x)$ denotes the value of $x$ at which $F(x)$ reaches its minimum.
It should be noted that, with this method, the similarity coefficients must be computed once for each hole pixel in the high dynamic range image; that is, the similarity coefficients differ from one hole pixel to another.
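The first method amounts to a distance-weighted polynomial least-squares fit between the second image and the high dynamic range image over the neighborhood $\Psi_r$. The sketch below is a minimal illustration under stated assumptions: $\Psi_r$ is taken as a square window (only one possible choice, as noted above), only neighbors whose values are already known contribute (tracked by a hypothetical `hole_mask`), and all names are illustrative.

```python
import numpy as np

def similarity_coefficients(hdr, img2, r, half, gamma, N, hole_mask):
    """Fit [a_0, ..., a_N] over the window Psi_r by weighted least squares."""
    rows, cols = hdr.shape
    r0, c0 = r
    w, X, y = [], [], []
    for i in range(max(0, r0 - half), min(rows, r0 + half + 1)):
        for j in range(max(0, c0 - half), min(cols, c0 + half + 1)):
            if hole_mask[i, j]:
                continue  # only neighbors with known values contribute
            dist2 = (i - r0) ** 2 + (j - c0) ** 2              # (r - s)^2
            w.append(np.exp(-gamma * dist2))
            X.append([img2[i, j] ** n for n in range(N + 1)])  # I_2^n(s)
            y.append(hdr[i, j])                                # I(s)
    w, X, y = np.asarray(w), np.asarray(X), np.asarray(y)
    sw = np.sqrt(w)  # weighted least squares via row scaling
    a, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return a
```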
The second method: according to the formula

$$[a_0, a_1, \ldots, a_N] = \arg\min \rho_1 \sum_{s \in \Phi_r} \left[I(s) - \sum_{n=0}^{N} a_n I_2^n(s)\right]^2 + \rho_2 \sum_{s \in A} \left[I(s) - \sum_{n=0}^{N} a'_n I_2^n(s)\right]^2,$$

the similarity coefficient between the neighboring pixels of any hole pixel $r$ in the high dynamic range image and the neighboring pixels of the first pixel is obtained.
where the first scale coefficient $\rho_1$ and the second scale coefficient $\rho_2$ are preset values; $s$ denotes a pixel in the neighborhood $\Phi_r$ of the pixel $r$ in the high dynamic range image; $A$ denotes the high dynamic range image; and $a'_n$ denotes the similarity coefficients obtained when the pixel value of the first hole pixel is calculated.
It should be noted that the pixel block $\Phi_r$ is smaller than the pixel block $\Psi_r$.
It should be noted that the neighborhood $\Phi_r$ may or may not be a region centered on the pixel $r$; the specific relationship between $\Phi_r$ and $r$ is not limited by the present invention.
The coefficients $a'_n$ are determined from the pixel values of every pixel in the high dynamic range image when the first hole pixel is calculated; to simplify the computation, the $a'_n$ determined at that first calculation are stored and reused directly when the pixel values of subsequent hole pixels are calculated.
It should be noted that taking the derivative of the above formula with respect to the variable $A_N = [a_0, a_1, \ldots, a_N]$ and collecting the remaining parameters yields $(C_1 + C_2) \times A_N = (B_1 + B_2)$, where $C_1$ and $B_1$ come from the coefficients of the first term of the formula and therefore depend on the pixels $s \in \Phi_r$, i.e. on the hole pixel $r$, while $C_2$ and $B_2$ come from the second term, whose coefficients do not depend on $r$. When different hole pixels are calculated, $C_2$ and $B_2$ are therefore identical and need not be recomputed; this is why the $a'_n$ determined when the first hole pixel is calculated can be reused in subsequent calculations.
The value of the first scale coefficient $\rho_1$ is greater than that of the second scale coefficient $\rho_2$. For example, $\rho_1$ may be set to 1 and $\rho_2$ to 0.001.
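Under one reading of the derivation above, the whole-image term contributes a matrix $C_2$ and a vector $B_2$ that can be computed once and reused for every hole pixel. The sketch below follows that assumption; `design`, the flattened-image inputs and the other names are illustrative.

```python
import numpy as np

def design(vals, N):
    """Stack the powers I_2^n(s), n = 0..N, of a 1-D array of pixel values."""
    return np.stack([vals ** n for n in range(N + 1)], axis=1)

def cache_global_term(hdr_vals, img2_vals, rho2, N):
    """C2 and B2 from the whole-image term; independent of the hole pixel r."""
    XA = design(img2_vals, N)
    return rho2 * XA.T @ XA, rho2 * XA.T @ hdr_vals

def coeffs_method2(local_hdr, local_img2, rho1, C2, B2, N):
    """Solve (C1 + C2) a = (B1 + B2) for one hole pixel's block Phi_r."""
    Xr = design(local_img2, N)
    C1 = rho1 * Xr.T @ Xr
    B1 = rho1 * Xr.T @ local_hdr
    return np.linalg.solve(C1 + C2, B1 + B2)
```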
The third method comprises the following steps:
first, it is determined whether the hole pixel r has a first hole pixel.
The first hole pixel is a hole pixel with a pixel value obtained in an adjacent hole pixel of the hole pixel r.
Secondly, if the first hole pixel is determined, the similarity coefficient of the first hole pixel is used as the similarity coefficient of the hole pixel r.
It should be noted that, in the third method, the similarity coefficients of a neighboring hole pixel whose pixel value has already been calculated are used as the similarity coefficients of the current hole pixel, which simplifies the calculation of the similarity coefficients.
It should be noted that the third method can be combined with either the first method or the second method to calculate the similarity coefficients of each hole pixel in the high dynamic range image.
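A minimal sketch of such a combination follows, assuming a 4-neighborhood for "adjacent"; `solved` (a map from already-computed hole pixels to their coefficient vectors) and `compute_fresh` (standing in for the first or second method) are illustrative names.

```python
def coefficients_for(r, solved, compute_fresh):
    """Reuse a neighboring solved hole pixel's coefficients when available."""
    r0, c0 = r
    for s in ((r0 - 1, c0), (r0 + 1, c0), (r0, c0 - 1), (r0, c0 + 1)):
        if s in solved:            # a "first hole pixel" adjacent to r exists
            return solved[s]
    return compute_fresh(r)        # otherwise fall back to method one or two
```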
Further, obtaining the pixel value of any hole pixel $r$ of the at least one hole pixel in the high dynamic range image according to its similarity coefficients and the first pixel comprises: obtaining the pixel value of the hole pixel $r$ according to the formula

$$I(r) = \sum_{n=0}^{N} a_n I_2^n(r),$$

where $I(r)$ denotes the pixel value of the hole pixel $r$; $I_2(r)$ denotes the pixel value of the pixel corresponding to $r$ in the second image; $a_n$ denotes a similarity coefficient of the hole pixel $r$; $n = 0, 1, \ldots, N$; and $N$ is a preset value.
It should be noted that, in the embodiment of the present invention, corresponding pixels in the two images represent pixels having the same position in the two images.
It should be noted that the larger the value of $N$, the more accurate the calculated result, but the computational complexity increases accordingly.
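Evaluating a hole pixel from its coefficients is then a one-line polynomial evaluation; a minimal sketch with illustrative names, implementing the formula above as reconstructed:

```python
def fill_hole_pixel(a, i2_r):
    """I(r) = sum_{n=0}^{N} a_n * I_2(r)^n for one hole pixel r."""
    return sum(a_n * i2_r ** n for n, a_n in enumerate(a))
```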
In the method for synthesizing a high dynamic range image provided by this embodiment of the present invention, a first image and a second image with different exposure degrees are first acquired; binocular stereo matching is performed on the two images to obtain a disparity map; a virtual view with the same viewing angle as the second image is synthesized according to the disparity map and the first image; a second grayscale image is obtained according to the second image and a virtual view grayscale image according to the virtual view; a high dynamic range grayscale image is obtained through a high dynamic range synthesis algorithm according to the second grayscale image and the virtual view grayscale image; and finally the high dynamic range image is obtained according to the high dynamic range grayscale image, the second grayscale image, the virtual view grayscale image, the second image and the virtual view. Throughout this process, the occlusion areas and the noise pixels that strongly affect the picture are marked as hole pixels; the relationship between each hole pixel and its corresponding pixel in the second image is then estimated from the relationship between the neighboring pixels of the hole pixel and the neighboring pixels of that corresponding pixel, and the pixel value of the hole pixel is obtained accordingly. In this way, the relationship between adjacent pixels is taken into account during virtual view synthesis, and the occlusion areas and noise pixels are further processed, so that the quality of the high dynamic range image is improved.
An embodiment of the present invention provides a method for synthesizing a disparity map, as shown in fig. 7, including:
701. A first image and a second image are acquired.
The first image and the second image are obtained by shooting the same object at the same time.
Specifically, refer to step 101, which is not described herein again.
702. A set of candidate disparity values for each pixel of the first image is acquired.
Wherein the set of candidate disparity values comprises at least two candidate disparity values.
Specifically, reference may be made to S1 in step 102, which is not described herein again.
703. The matching energy $E_d(p, d_i)$ of each candidate disparity value in the candidate disparity value set of each pixel of the first image is obtained according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image.
where $p$ denotes a pixel of the first image corresponding to the set of candidate disparity values; $d_i$ denotes the $i$-th candidate disparity value of pixel $p$, $i = 1, \ldots, k$; and $k$ is the total number of candidate disparity values in the set of candidate disparity values for pixel $p$.
It should be noted that, since the set of candidate disparity values for each pixel includes at least two candidate disparity values, k ≧ 2.
Furthermore, the present invention provides two methods of calculating the matching energy $E_d(p, d_i)$, as follows:
The first method: according to each candidate disparity value in the candidate disparity value set of pixel $p$, the formula

$$E_d(p, d_i) = \frac{\sum_{q \in \Omega_p} w(p, q, d_i) \times \left[I_1(q) - a \times I_2(q - d_i) - b\right]^2}{\sum_{q \in \Omega_p} w(p, q, d_i)}$$

is used to find the matching energy $E_d(p, d_i)$ of each candidate disparity value $d_i$ in the set of candidate disparity values for pixel $p$.
where the values of the first fitting parameter $a$ and the second fitting parameter $b$ are those that minimize the matching energy $E_d(p, d_i)$; $w(p, q, d_i) = w_c(p, q, d_i)\, w_s(p, q, d_i)\, w_d(p, q, d_i)$; the first pixel block $\Omega_p$ denotes a pixel block containing the pixel $p$ in the first image; the pixel $q$ is a pixel adjacent to $p$ and belonging to the first pixel block $\Omega_p$; $I_1(q)$ denotes the pixel value of pixel $q$; $I_2(q - d_i)$ denotes the pixel value of the pixel $q - d_i$ corresponding to $q$ in the second image; $w_c(p, q, d_i)$ denotes a pixel weight value; $w_s(p, q, d_i)$ denotes a distance weight value; and $w_d(p, q, d_i)$ denotes a disparity weight value.
Further, the pixel weight value may be obtained according to the formula

$$w_c(p, q, d_i) = \exp\left[-\beta_1 \times |I_1(p) - I_1(q)| \times |I_2(p - d_i) - I_2(q - d_i)|\right];$$

the distance weight value according to the formula

$$w_s(p, q, d_i) = \exp\left[-\beta_2 \times (p - q)^2\right];$$

and the disparity weight value according to the formula

$$w_d(p, q, d_i) = \exp\left[-\beta_3 \times (p - q)^2 - \beta_4 \times |I_1(p) - I_1(q)| \times |I_2(p - d_i) - I_2(q - d_i)|\right].$$

where $I_1(p)$ denotes the pixel value of the pixel $p$; $I_2(p - d_i)$ denotes the pixel value of the pixel $p - d_i$ corresponding to $p$ in the second image; and the first weight coefficient $\beta_1$, the second weight coefficient $\beta_2$, the third weight coefficient $\beta_3$ and the fourth weight coefficient $\beta_4$ are preset values.
Specifically, reference may be made to the first method of S2 in step 102, which is not described herein again.
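To make the first method concrete, the sketch below evaluates $E_d(p, d_i)$ for a single pixel and candidate disparity. It is a minimal sketch under stated assumptions: floating-point grayscale arrays, a purely horizontal disparity (so $q - d_i$ shifts the column index), a square block $\Omega_p$, and fitting parameters $a$ and $b$ obtained by weighted linear least squares, which minimizes the same weighted sum that the energy evaluates.

```python
import numpy as np

def matching_energy(I1, I2, p, d, half, b1, b2, b3, b4):
    """E_d(p, d_i) for pixel p = (row, col) and one candidate disparity d."""
    rows, cols = I1.shape
    r0, c0 = p
    if not 0 <= c0 - d < cols:
        return np.inf                      # p - d_i falls outside I2
    w, x, y = [], [], []
    for i in range(max(0, r0 - half), min(rows, r0 + half + 1)):
        for j in range(max(0, c0 - half), min(cols, c0 + half + 1)):
            if not 0 <= j - d < cols:
                continue                   # q - d_i must fall inside I2
            grad = abs(I1[r0, c0] - I1[i, j]) * abs(I2[r0, c0 - d] - I2[i, j - d])
            dist2 = (i - r0) ** 2 + (j - c0) ** 2
            wc = np.exp(-b1 * grad)                 # pixel weight
            ws = np.exp(-b2 * dist2)                # distance weight
            wd = np.exp(-b3 * dist2 - b4 * grad)    # disparity weight
            w.append(wc * ws * wd)
            x.append(I2[i, j - d])                  # I_2(q - d_i)
            y.append(I1[i, j])                      # I_1(q)
    w, x, y = map(np.asarray, (w, x, y))
    A = np.stack([x, np.ones_like(x)], axis=1)
    sw = np.sqrt(w)
    (a, b), *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return float((w * (y - a * x - b) ** 2).sum() / w.sum())
```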
The second method: according to each candidate disparity value in the candidate disparity value set of pixel $p$, the formula

$$E_d(p, d_i) = \frac{\sum_{q \in \Omega_p} w(p, q, d_i) \times \left[I'_1(q) - a \times I'_2(q - d_i) - b\right]^2}{\sum_{q \in \Omega_p} w(p, q, d_i)}$$

is used to find the matching energy $E_d(p, d_i)$ of each candidate disparity value $d_i$ in the set of candidate disparity values for pixel $p$.
where $w(p, q, d_i) = w_c(p, q, d_i)\, w_s(p, q, d_i)\, w_d(p, q, d_i)$;

the pixel weight value may be obtained according to the formula

$$w_c(p, q, d_i) = \exp\left[-\beta_1 \times |I'_1(p) - I'_1(q)| \times |I'_2(p - d_i) - I'_2(q - d_i)|\right];$$

the distance weight value according to the formula

$$w_s(p, q, d_i) = \exp\left[-\beta_2 \times (p - q)^2\right];$$

and the disparity weight value according to the formula

$$w_d(p, q, d_i) = \exp\left[-\beta_3 \times (p - q)^2 - \beta_4 \times |I'_1(p) - I'_1(q)| \times |I'_2(p - d_i) - I'_2(q - d_i)|\right];$$

with the adjusted intensities

$$I'_1(p) = I_1(p)\cos\theta - I_2(p - d_i)\sin\theta;$$
$$I'_2(p - d_i) = I_1(p)\sin\theta - I_2(p - d_i)\cos\theta;$$
$$I'_1(q) = I_1(q)\cos\theta - I_2(q - d_i)\sin\theta;$$
$$I'_2(q - d_i) = I_1(q)\sin\theta - I_2(q - d_i)\cos\theta.$$
the adjustment angle θ is a value set in advance to be greater than 0 ° and less than 90 °.
Specifically, reference may be made to the second method of S2 in step 102, which is not described herein again.
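The second method differs from the first only in that each intensity pair is first transformed by the preset angle $\theta$ before the same weighted fit is applied. A minimal sketch of that transformation, with the signs as given in the formulas above and an illustrative function name:

```python
import numpy as np

def rotate_pair(i1_val, i2_val, theta_deg):
    """Apply the preset-angle adjustment to an (I_1, I_2) intensity pair."""
    t = np.deg2rad(theta_deg)              # 0 < theta < 90 degrees
    i1p = i1_val * np.cos(t) - i2_val * np.sin(t)   # I'_1
    i2p = i1_val * np.sin(t) - i2_val * np.cos(t)   # I'_2
    return i1p, i2p
```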
704. The disparity value of each pixel in the first image is obtained according to the matching energy $E_d(p, d_i)$ of each candidate disparity value in the candidate disparity value set of each pixel of the first image.
Further, according to the formula

$$E(d_i) = \sum_{p \in I} E_d(p, d_i) + \sum_{p \in I} \sum_{q \in N_p} V_{p,q}(d_i, d_j),$$

the candidate disparity value $d_i$ that minimizes the candidate energy $E(d_i)$ of each pixel in the first image is found and determined as the disparity value of that pixel.
where $I$ denotes the first image; the second pixel block $N_p$ denotes a pixel block containing the pixel $p$ in the first image; $V_{p,q}(d_i, d_j) = \lambda \times \min(|d_i - d_j|, V_{max})$; $d_j$ denotes the $j$-th candidate disparity value of pixel $q$, $j = 1, \ldots, m$; $m$ is the total number of candidate disparity values in the set of candidate disparity values for pixel $q$; the smoothing coefficient $\lambda$ is a preset value; and the maximum disparity difference between adjacent pixels, $V_{max}$, is a preset value.
Specifically, reference may be made to S3 in step 102, which is not described herein again.
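The candidate energy above is a per-pixel data term plus a truncated-linear smoothness term over neighboring pixels. A full global minimizer is beyond a short example, so the sketch below uses iterated conditional modes as a simple stand-in that follows the same energy definition; it assumes a precomputed array `Ed[row, col, i]` of matching energies and a 4-neighborhood for $N_p$, both illustrative choices rather than anything prescribed by the text.

```python
import numpy as np

def icm_disparity(Ed, cands, lam, vmax, iters=5):
    """Approximately minimize the data term plus truncated-linear smoothness."""
    rows, cols, k = Ed.shape
    labels = Ed.argmin(axis=2)             # initialize from the data term alone
    for _ in range(iters):
        for r in range(rows):
            for c in range(cols):
                best_i, best_e = labels[r, c], np.inf
                for i in range(k):
                    e = Ed[r, c, i]        # E_d(p, d_i)
                    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols:
                            dj = cands[labels[rr, cc]]
                            e += lam * min(abs(cands[i] - dj), vmax)  # V_{p,q}
                    if e < best_e:
                        best_i, best_e = i, e
                labels[r, c] = best_i
    return np.asarray(cands)[labels]       # disparity value of each pixel
```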
705. The disparity values of the pixels in the first image are combined to obtain a disparity map.
Specifically, reference may be made to S4 in step 102, which is not described herein again.
This embodiment of the present invention provides a method for synthesizing a disparity map: a first image and a second image are acquired; a set of candidate disparity values is acquired for each pixel of the first image; the matching energy $E_d(p, d_i)$ of each candidate disparity value in the set is obtained according to each pixel of the first image, the corresponding pixel in the second image and the candidate disparity value set of each pixel; the disparity value of each pixel in the first image is then obtained according to the matching energy $E_d(p, d_i)$ of each candidate disparity value; and finally the disparity values of the pixels are combined to obtain the disparity map. In this way, when the disparity value of each pixel is calculated, the error between the finally obtained disparity value and the true disparity value is greatly reduced, and the quality of the high dynamic range image is improved.
Fig. 8 is a functional schematic diagram of a high dynamic range image synthesis apparatus according to an embodiment of the present invention. Referring to fig. 8, the high dynamic range image synthesizing apparatus includes: an acquisition unit 801, a parallax processing unit 802, a virtual view synthesis unit 803, a gradation extraction unit 804, a high dynamic range fusion unit 805, and a color interpolation unit 806.
An acquiring unit 801 is used for acquiring a first image and a second image.
The first image and the second image are obtained by shooting the same object at the same time by adopting different exposure degrees.
The parallax processing unit 802 is configured to perform binocular stereo matching on the first image and the second image acquired by the acquisition unit 801 to obtain a parallax map.
Further, as shown in fig. 9, the parallax processing unit 802 includes: an acquisition module 8021, a calculation module 8022, a determination module 8023, and a combination module 8024.
An obtaining module 8021, configured to obtain a set of candidate disparity values for each pixel of the first image.
Wherein the set of candidate disparity values comprises at least two candidate disparity values.
A calculating module 8022, configured to obtain the matching energy $E_d(p, d_i)$ of each candidate disparity value in the candidate disparity value set of each pixel of the first image according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image.
where $p$ denotes a pixel of the first image corresponding to the set of candidate disparity values; $d_i$ denotes the $i$-th candidate disparity value of pixel $p$, $i = 1, \ldots, k$; and $k$ is the total number of candidate disparity values in the set of candidate disparity values for pixel $p$.
Specifically, the calculation module 8022 may obtain the matching energy $E_d(p, d_i)$ of each candidate disparity value in the candidate disparity value set of each pixel of the first image by either of two methods:
In the first method, the calculation module 8022 is specifically configured to use, according to each candidate disparity value in the candidate disparity value set of the pixel $p$, the formula

$$E_d(p, d_i) = \frac{\sum_{q \in \Omega_p} w(p, q, d_i) \times \left[I_1(q) - a \times I_2(q - d_i) - b\right]^2}{\sum_{q \in \Omega_p} w(p, q, d_i)}$$

to find the matching energy $E_d(p, d_i)$ of each candidate disparity value $d_i$ in the set of candidate disparity values for pixel $p$.
where the values of the first fitting parameter $a$ and the second fitting parameter $b$ are those that minimize the matching energy $E_d(p, d_i)$; $w(p, q, d_i) = w_c(p, q, d_i)\, w_s(p, q, d_i)\, w_d(p, q, d_i)$; the first pixel block $\Omega_p$ denotes a pixel block of the first image containing the pixel $p$; the pixel $q$ is adjacent to the pixel $p$ and belongs to the first pixel block $\Omega_p$; $I_1(q)$ denotes the pixel value of pixel $q$; $I_2(q - d_i)$ denotes the pixel value of the pixel $q - d_i$ corresponding to $q$ in the second image; $w_c(p, q, d_i)$ denotes a pixel weight value; $w_s(p, q, d_i)$ denotes a distance weight value; and $w_d(p, q, d_i)$ denotes a disparity weight value.
Further, the pixel weight value may be obtained according to the formula

$$w_c(p, q, d_i) = \exp\left[-\beta_1 \times |I_1(p) - I_1(q)| \times |I_2(p - d_i) - I_2(q - d_i)|\right];$$

the distance weight value according to the formula

$$w_s(p, q, d_i) = \exp\left[-\beta_2 \times (p - q)^2\right];$$

and the disparity weight value according to the formula

$$w_d(p, q, d_i) = \exp\left[-\beta_3 \times (p - q)^2 - \beta_4 \times |I_1(p) - I_1(q)| \times |I_2(p - d_i) - I_2(q - d_i)|\right].$$

where $I_1(p)$ denotes the pixel value of the pixel $p$; $I_2(p - d_i)$ denotes the pixel value of the pixel $p - d_i$ corresponding to $p$ in the second image; and the first weight coefficient $\beta_1$, the second weight coefficient $\beta_2$, the third weight coefficient $\beta_3$ and the fourth weight coefficient $\beta_4$ are preset values.
In the second method, the calculation module 8022 is specifically configured to use, according to each candidate disparity value in the candidate disparity value set of the pixel $p$, the formula

$$E_d(p, d_i) = \frac{\sum_{q \in \Omega_p} w(p, q, d_i) \times \left[I'_1(q) - a \times I'_2(q - d_i) - b\right]^2}{\sum_{q \in \Omega_p} w(p, q, d_i)}$$

to find the matching energy $E_d(p, d_i)$ of each candidate disparity value $d_i$ in the set of candidate disparity values for pixel $p$.
where $w(p, q, d_i) = w_c(p, q, d_i)\, w_s(p, q, d_i)\, w_d(p, q, d_i)$;

the pixel weight value may be obtained according to the formula

$$w_c(p, q, d_i) = \exp\left[-\beta_1 \times |I'_1(p) - I'_1(q)| \times |I'_2(p - d_i) - I'_2(q - d_i)|\right];$$

the distance weight value according to the formula

$$w_s(p, q, d_i) = \exp\left[-\beta_2 \times (p - q)^2\right];$$

and the disparity weight value according to the formula

$$w_d(p, q, d_i) = \exp\left[-\beta_3 \times (p - q)^2 - \beta_4 \times |I'_1(p) - I'_1(q)| \times |I'_2(p - d_i) - I'_2(q - d_i)|\right];$$

with the adjusted intensities

$$I'_1(p) = I_1(p)\cos\theta - I_2(p - d_i)\sin\theta;$$
$$I'_2(p - d_i) = I_1(p)\sin\theta - I_2(p - d_i)\cos\theta;$$
$$I'_1(q) = I_1(q)\cos\theta - I_2(q - d_i)\sin\theta;$$
$$I'_2(q - d_i) = I_1(q)\sin\theta - I_2(q - d_i)\cos\theta.$$
the adjustment angle θ is a value set in advance to be greater than 0 ° and less than 90 °.
A determining module 8023, configured to obtain the disparity value of each pixel in the first image according to the matching energy $E_d(p, d_i)$ of each candidate disparity value in the set of candidate disparity values for each pixel of the first image.
Specifically, the determining module 8023 is configured to find, according to the formula

$$E(d_i) = \sum_{p \in I} E_d(p, d_i) + \sum_{p \in I} \sum_{q \in N_p} V_{p,q}(d_i, d_j),$$

the candidate disparity value $d_i$ that minimizes the candidate energy $E(d_i)$ of each pixel in the first image, and to determine it as the disparity value of that pixel.
where $I$ denotes the first image; the second pixel block $N_p$ denotes a pixel block containing the pixel $p$ in the first image; $V_{p,q}(d_i, d_j) = \lambda \times \min(|d_i - d_j|, V_{max})$; $d_j$ denotes the $j$-th candidate disparity value of pixel $q$, $j = 1, \ldots, m$; $m$ is the total number of candidate disparity values in the set of candidate disparity values for pixel $q$; the smoothing coefficient $\lambda$ is a preset value; and the maximum disparity difference between adjacent pixels, $V_{max}$, is a preset value.
The combining module 8024 is configured to combine the disparity values of each pixel in the first image to obtain a disparity map.
A virtual view synthesis unit 803, configured to synthesize a virtual view having the same viewing angle as the second image according to the disparity map obtained by the disparity processing unit 802 and the first image acquired by the acquiring unit 801.
A grayscale extraction unit 804, configured to obtain a second grayscale image according to the second image acquired by the acquiring unit 801, and to obtain a virtual view grayscale image according to the virtual view synthesized by the virtual view synthesis unit 803.
Further, the grayscale extraction unit 804 is specifically configured to obtain a virtual view grayscale image marked with hole pixels according to the virtual view marked with hole pixels.
And a high dynamic range fusion unit 805, configured to obtain a high dynamic range grayscale image through a high dynamic range synthesis algorithm according to the second grayscale image and the virtual view grayscale image obtained by the grayscale extraction unit 804.
Further, the high dynamic range fusion unit 805 is specifically configured to obtain a high dynamic range gray scale image marked with the hole pixel through a high dynamic range synthesis algorithm according to the second gray scale image and the virtual view gray scale image marked with the hole pixel.
And a color interpolation unit 806, configured to obtain a high dynamic range image according to the high dynamic range grayscale image, the second grayscale image, the virtual view grayscale image, the second image, and the virtual view.
Further, the color interpolation unit 806 is specifically configured to sequentially determine, using the formulas

$$I^{red}(e) = \eta(e)\, I_2^{red}(e) + \left[1 - \eta(e)\right] \times I_3^{red}(e),$$
$$I^{green}(e) = \eta(e)\, I_2^{green}(e) + \left[1 - \eta(e)\right] \times I_3^{green}(e),$$
$$I^{blue}(e) = \eta(e)\, I_2^{blue}(e) + \left[1 - \eta(e)\right] \times I_3^{blue}(e),$$

the red component value $I^{red}(e)$, the green component value $I^{green}(e)$ and the blue component value $I^{blue}(e)$ of each pixel in the high dynamic range image.
where $e$ denotes a pixel in the high dynamic range image; $\eta(e)$ is the weight coefficient determined from the grayscale pixel values; $I^{grey}(e)$ denotes the pixel value of the pixel corresponding to $e$ in the high dynamic range grayscale image, $I_2^{grey}(e)$ the pixel value of the pixel corresponding to $e$ in the second grayscale image, and $I_3^{grey}(e)$ the pixel value of the pixel corresponding to $e$ in the virtual view grayscale image; $I_2^{red}(e)$, $I_2^{green}(e)$ and $I_2^{blue}(e)$ respectively denote the red, green and blue component values of the pixel corresponding to $e$ in the second image; and $I_3^{red}(e)$, $I_3^{green}(e)$ and $I_3^{blue}(e)$ respectively denote the red, green and blue component values of the pixel corresponding to $e$ in the virtual view.
The color interpolation unit 806 is specifically configured to obtain a pixel value of each pixel in the high dynamic range image according to the red component value, the green component value, and the blue component value of each pixel in the high dynamic range image.
The color interpolation unit 806 is specifically configured to combine the pixel values of each pixel in the high dynamic range image to obtain the high dynamic range image.
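The per-channel interpolation is a straightforward weighted blend, sketched below for whole arrays. It assumes `eta` already holds the per-pixel weight $\eta(e)$ obtained from the grayscale images, and that the two color inputs are (H, W, 3) arrays; the names are illustrative.

```python
import numpy as np

def interpolate_color(eta, img2_rgb, view_rgb):
    """Blend per channel: I(e) = eta(e) * I_2(e) + [1 - eta(e)] * I_3(e)."""
    eta3 = eta[..., None]                  # broadcast the weight over R, G, B
    return eta3 * img2_rgb + (1.0 - eta3) * view_rgb
```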
Further, the color interpolation unit 806 is specifically configured to obtain the high dynamic range image marked with the hole pixel according to the high dynamic range grayscale image marked with the hole pixel, the second grayscale image, the virtual view grayscale image marked with the hole pixel, the second image, and the virtual view marked with the hole pixel.
Further, as shown in fig. 10, the high dynamic range image synthesizing apparatus further includes: an aperture pixel processing unit 807.
A hole pixel processing unit 807, configured to mark noise pixels and occlusion regions in the virtual view as hole pixels.
The occlusion region is a region that arises because the first image and the second image capture the same object from different angles; a noise pixel arises from a pixel in the disparity map whose disparity value was calculated incorrectly.
In particular, the hole pixel processing unit 807 is configured to determine at least two second pixels in the second image.
The second pixels are pixels with the same pixel value.
The hole pixel processing unit 807 is specifically configured to obtain at least two marked pixels in the virtual view according to at least two second pixels in the second image.
Wherein the at least two marked pixels in the virtual view are pixels in the virtual view that correspond to the at least two second pixels in the second image, respectively.
The hole pixel processing unit 807 is specifically configured to obtain an average pixel value of at least two marked pixels in the virtual view.
The hole pixel processing unit 807 is specifically configured to sequentially determine whether a difference between a pixel value of each of at least two marked pixels in the virtual view and the average pixel value is greater than a noise threshold.
The noise threshold value is a value set in advance for determining noise.
The hole pixel processing unit 807 is specifically configured to determine the marked pixel as a noise pixel and mark the noise pixel as a hole pixel when a difference between a pixel value of the marked pixel and the average pixel value is greater than a noise threshold value.
The hole pixel processing unit 807 is further configured to determine a first pixel corresponding to each hole pixel in the high dynamic range image marked with the hole pixel in the second image.
The hole pixel processing unit 807 is further configured to obtain a similarity coefficient between a neighboring pixel of each hole pixel in the high dynamic range image and a neighboring pixel of the first pixel; and obtaining the pixel value of each hole pixel in at least one hole pixel in the high dynamic range image according to the similarity coefficient and the first pixel.
Specifically, the hole pixel processing unit 807 may obtain the similarity coefficient between the neighboring pixels of each hole pixel in the high dynamic range image and the neighboring pixels of the first pixel by any of the following three methods:
In the first method, the hole pixel processing unit 807 is specifically configured to obtain, according to the formula

$$[a_0, a_1, \ldots, a_N] = \arg\min \sum_{s \in \Psi_r} \exp\left[-\gamma (r-s)^2\right] \times \left[I(s) - \sum_{n=0}^{N} a_n I_2^n(s)\right]^2,$$

the similarity coefficient between the neighboring pixels of any hole pixel $r$ in the high dynamic range image and the neighboring pixels of the first pixel.
where $s$ denotes a pixel in the neighborhood $\Psi_r$ of the pixel $r$ in the high dynamic range image; $I(s)$ denotes the pixel value of pixel $s$; $I_2(s)$ denotes the pixel value of the pixel corresponding to $s$ in the second image; $r - s$ denotes the distance between pixel $r$ and pixel $s$; and $\gamma$ is a preset weight coefficient applied to the distance between pixel $r$ and pixel $s$.
In the second method, the hole pixel processing unit 807 is specifically configured to obtain, according to the formula

$$[a_0, a_1, \ldots, a_N] = \arg\min \rho_1 \sum_{s \in \Phi_r} \left[I(s) - \sum_{n=0}^{N} a_n I_2^n(s)\right]^2 + \rho_2 \sum_{s \in A} \left[I(s) - \sum_{n=0}^{N} a'_n I_2^n(s)\right]^2,$$

the similarity coefficient between the neighboring pixels of any hole pixel $r$ in the high dynamic range image and the neighboring pixels of the first pixel.
where the first scale coefficient $\rho_1$ and the second scale coefficient $\rho_2$ are preset values; $s$ denotes a pixel in the neighborhood $\Phi_r$ of the pixel $r$ in the high dynamic range image; $A$ denotes the high dynamic range image; and $a'_n$ denotes the similarity coefficients obtained when the pixel value of the first hole pixel is calculated.
In the third method, the hole pixel processing unit 807 is specifically configured to determine whether the hole pixel $r$ has a first hole pixel and, when it is determined that the first hole pixel exists, to use the similarity coefficient of the first hole pixel as the similarity coefficient of the hole pixel $r$.
The first hole pixel is a hole pixel with a pixel value obtained in an adjacent hole pixel of the hole pixel r.
Specifically, the hole pixel processing unit 807 is configured to obtain the pixel value of the hole pixel $r$ according to the formula

$$I(r) = \sum_{n=0}^{N} a_n I_2^n(r),$$

where $I(r)$ denotes the pixel value of the hole pixel $r$; $I_2(r)$ denotes the pixel value of the pixel corresponding to $r$ in the second image; $a_n$ denotes a similarity coefficient of the hole pixel $r$; $n = 0, 1, \ldots, N$; and $N$ is a preset value.
With the high dynamic range image synthesis device provided by this embodiment of the present invention, a first image and a second image with different exposure degrees are first acquired; binocular stereo matching is performed on the two images to obtain a disparity map; a virtual view with the same viewing angle as the second image is synthesized according to the disparity map and the first image; a second grayscale image is obtained according to the second image and a virtual view grayscale image according to the virtual view; a high dynamic range grayscale image is obtained through a high dynamic range synthesis algorithm according to the second grayscale image and the virtual view grayscale image; and finally the high dynamic range image is obtained according to the high dynamic range grayscale image, the second grayscale image, the virtual view grayscale image, the second image and the virtual view. Throughout this process, the occlusion areas and the noise pixels that strongly affect the picture are marked as hole pixels; the relationship between each hole pixel and its corresponding pixel in the second image is then estimated from the relationship between the neighboring pixels of the hole pixel and the neighboring pixels of that corresponding pixel, and the pixel value of the hole pixel is obtained accordingly. In this way, the relationship between adjacent pixels is taken into account during virtual view synthesis, and the occlusion areas and noise pixels are further processed, so that the quality of the high dynamic range image is improved.
Fig. 11 is a functional diagram of an apparatus according to an embodiment of the present invention. Referring to fig. 11, the apparatus includes: an acquisition unit 1101, a calculation unit 1102, a determination unit 1103, and a processing unit 1104.
An acquiring unit 1101 is configured to acquire a first image and a second image.
The first image and the second image are obtained by shooting the same object at the same time;
the acquiring unit 1101 is further configured to acquire a set of candidate disparity values for each pixel of the first image.
Wherein the set of candidate disparity values comprises at least two candidate disparity values.
A calculating unit 1102, configured to obtain the matching energy $E_d(p, d_i)$ of each candidate disparity value in the candidate disparity value set of each pixel of the first image according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image.
where $p$ denotes a pixel of the first image corresponding to the set of candidate disparity values; $d_i$ denotes the $i$-th candidate disparity value of pixel $p$, $i = 1, \ldots, k$; and $k$ is the total number of candidate disparity values in the set of candidate disparity values for pixel $p$.
Further, the calculating unit 1102 may obtain the matching energy $E_d(p, d_i)$ of each candidate disparity value in the set of candidate disparity values for each pixel of the first image by either of two methods:
In the first method, the calculating unit 1102 is specifically configured to use, according to each candidate disparity value in the candidate disparity value set of the pixel $p$, the formula

$$E_d(p, d_i) = \frac{\sum_{q \in \Omega_p} w(p, q, d_i) \times \left[I_1(q) - a \times I_2(q - d_i) - b\right]^2}{\sum_{q \in \Omega_p} w(p, q, d_i)}$$

to find the matching energy $E_d(p, d_i)$ of each candidate disparity value $d_i$ in the set of candidate disparity values for pixel $p$.
where the values of the first fitting parameter $a$ and the second fitting parameter $b$ are those that minimize the matching energy $E_d(p, d_i)$; $w(p, q, d_i) = w_c(p, q, d_i)\, w_s(p, q, d_i)\, w_d(p, q, d_i)$; the first pixel block $\Omega_p$ denotes a pixel block containing the pixel $p$ in the first image; the pixel $q$ is adjacent to the pixel $p$ and belongs to the first pixel block $\Omega_p$; $I_1(q)$ denotes the pixel value of pixel $q$; $I_2(q - d_i)$ denotes the pixel value of the pixel $q - d_i$ corresponding to $q$ in the second image; $w_c(p, q, d_i)$ denotes a pixel weight value; $w_s(p, q, d_i)$ denotes a distance weight value; and $w_d(p, q, d_i)$ denotes a disparity weight value.
Further, the pixel weight value may be obtained according to the formula

$$w_c(p, q, d_i) = \exp\left[-\beta_1 \times |I_1(p) - I_1(q)| \times |I_2(p - d_i) - I_2(q - d_i)|\right];$$

the distance weight value according to the formula

$$w_s(p, q, d_i) = \exp\left[-\beta_2 \times (p - q)^2\right];$$

and the disparity weight value according to the formula

$$w_d(p, q, d_i) = \exp\left[-\beta_3 \times (p - q)^2 - \beta_4 \times |I_1(p) - I_1(q)| \times |I_2(p - d_i) - I_2(q - d_i)|\right].$$

where $I_1(p)$ denotes the pixel value of pixel $p$; $I_2(p - d_i)$ denotes the pixel value of the pixel $p - d_i$ corresponding to $p$ in the second image; and the first weight coefficient $\beta_1$, the second weight coefficient $\beta_2$, the third weight coefficient $\beta_3$ and the fourth weight coefficient $\beta_4$ are preset values.
In the second method, the computing unit 1102 is specifically configured to use, according to each candidate disparity value in the candidate disparity value set of the pixel $p$, the formula

$$E_d(p, d_i) = \frac{\sum_{q \in \Omega_p} w(p, q, d_i) \times \left[I'_1(q) - a \times I'_2(q - d_i) - b\right]^2}{\sum_{q \in \Omega_p} w(p, q, d_i)}$$

to find the matching energy $E_d(p, d_i)$ of each candidate disparity value $d_i$ in the set of candidate disparity values for pixel $p$.
where $w(p, q, d_i) = w_c(p, q, d_i)\, w_s(p, q, d_i)\, w_d(p, q, d_i)$;

the pixel weight value may be obtained according to the formula

$$w_c(p, q, d_i) = \exp\left[-\beta_1 \times |I'_1(p) - I'_1(q)| \times |I'_2(p - d_i) - I'_2(q - d_i)|\right];$$

the distance weight value according to the formula

$$w_s(p, q, d_i) = \exp\left[-\beta_2 \times (p - q)^2\right];$$

and the disparity weight value according to the formula

$$w_d(p, q, d_i) = \exp\left[-\beta_3 \times (p - q)^2 - \beta_4 \times |I'_1(p) - I'_1(q)| \times |I'_2(p - d_i) - I'_2(q - d_i)|\right];$$

with the adjusted intensities

$$I'_1(p) = I_1(p)\cos\theta - I_2(p - d_i)\sin\theta;$$
$$I'_2(p - d_i) = I_1(p)\sin\theta - I_2(p - d_i)\cos\theta;$$
$$I'_1(q) = I_1(q)\cos\theta - I_2(q - d_i)\sin\theta;$$
$$I'_2(q - d_i) = I_1(q)\sin\theta - I_2(q - d_i)\cos\theta.$$
the adjustment angle θ is a value set in advance to be greater than 0 ° and less than 90 °.
A determining unit 1103, configured to obtain the disparity value of each pixel in the first image according to the matching energy $E_d(p, d_i)$ of each candidate disparity value in the set of candidate disparity values for each pixel of the first image.
Further, the determining unit 1103 is specifically configured to find, according to the formula
$$E(d_i) = \sum_{p \in I} E_d(p, d_i) + \sum_{p \in I} \sum_{q \in N_p} V_{p,q}(d_i, d_j),$$

the candidate disparity value $d_i$ that minimizes the candidate energy $E(d_i)$ of each pixel in the first image, and to determine it as the disparity value of that pixel.
where $I$ denotes the first image; the second pixel block $N_p$ denotes a pixel block containing the pixel $p$ in the first image; $V_{p,q}(d_i, d_j) = \lambda \times \min(|d_i - d_j|, V_{max})$; $d_j$ denotes the $j$-th candidate disparity value of pixel $q$, $j = 1, \ldots, m$; $m$ is the total number of candidate disparity values in the set of candidate disparity values for pixel $q$; the smoothing coefficient $\lambda$ is a preset value; and the maximum disparity difference between adjacent pixels, $V_{max}$, is a preset value.
A processing unit 1104, configured to combine the disparity values of the pixels in the first image to obtain a disparity map.
This embodiment of the present invention provides an apparatus that acquires a first image and a second image, acquires a set of candidate disparity values for each pixel of the first image, obtains the matching energy $E_d(p, d_i)$ of each candidate disparity value in the set according to each pixel of the first image, the corresponding pixel in the second image and the candidate disparity value set of each pixel, then obtains the disparity value of each pixel in the first image according to the matching energy $E_d(p, d_i)$ of each candidate disparity value, and finally combines the disparity values of the pixels to obtain a disparity map. In this way, when the disparity value of each pixel is calculated, the error between the finally obtained disparity value and the true disparity value is greatly reduced, and the quality of the high dynamic range image is improved.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be realized in the form of hardware, or in the form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute some steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (36)

1. A method of high dynamic range image synthesis, comprising:
acquiring a first image and a second image; the first image and the second image are obtained by shooting the same object at the same time by adopting different exposure degrees;
performing binocular stereo matching on the first image and the second image to obtain a disparity map;
synthesizing a virtual view having the same viewing angle as the second image according to the disparity map and the first image;
obtaining a second grayscale image according to the second image, and obtaining a virtual-view grayscale image according to the virtual view;
obtaining a high dynamic range grayscale image through a high dynamic range synthesis algorithm according to the second grayscale image and the virtual-view grayscale image;
and obtaining a high dynamic range image according to the high dynamic range grayscale image, the second grayscale image, the virtual-view grayscale image, the second image and the virtual view.
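To make the data flow of the claim easier to follow, here is a condensed NumPy sketch that takes the disparity map as given. The helper names, the Rec.601 luma conversion, and the stand-in averaging fusion are assumptions for illustration, not the claimed algorithms.

```python
import numpy as np

def to_gray(img):
    # Rec.601 luma; the claim only requires "a grayscale image".
    return img @ np.array([0.299, 0.587, 0.114])

def warp_to_second_view(img1, disp):
    # Shift each row of the first image by its integer disparity so the
    # result shares the second image's viewing angle.
    h, w = disp.shape
    xs = np.clip(np.arange(w)[None, :] - disp, 0, w - 1).astype(int)
    return img1[np.arange(h)[:, None], xs]

def fuse_gray(g2, gv):
    # Stand-in for the high dynamic range synthesis algorithm.
    return 0.5 * (g2 + gv)

def hdr_pipeline(img1, img2, disp):
    """Steps of claim 1 after stereo matching; img1/img2 are (H, W, 3)
    floats shot at different exposures, disp is img1's disparity map."""
    virtual = warp_to_second_view(img1, disp)
    g2, gv = to_gray(img2), to_gray(virtual)
    g_hdr = fuse_gray(g2, gv)
    return g_hdr, g2, gv, virtual   # inputs to the colour step (claim 8)
```

The final recolouring step that turns these grayscale results back into an RGB high dynamic range image is sketched after claim 8 below.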
2. The method of claim 1, further comprising:
when the virtual view having the same viewing angle as the second image is synthesized according to the disparity map and the first image, marking pixels of an occlusion area in the virtual view as hole pixels; the occlusion region is a region that arises because the first image and the second image capture the same object from different angles; or,
after the synthesizing of the virtual view having the same viewing angle as the second image according to the disparity map and the first image, and before obtaining the second grayscale image according to the second image and the virtual-view grayscale image according to the virtual view, the method further comprises: marking noise pixels or the occlusion regions in the virtual view as hole pixels; the noise pixel is generated by a pixel with a disparity value calculation error in the disparity map;
the obtaining of the virtual-view grayscale image according to the virtual view comprises:
obtaining a virtual-view grayscale image marked with the hole pixels according to the virtual view marked with the hole pixels;
the obtaining of the high dynamic range grayscale image through a high dynamic range synthesis algorithm according to the second grayscale image and the virtual-view grayscale image comprises:
obtaining a high dynamic range grayscale image marked with the hole pixels through a high dynamic range synthesis algorithm according to the second grayscale image and the virtual-view grayscale image marked with the hole pixels;
the obtaining a high dynamic range image according to the high dynamic range grayscale image, the second grayscale image, the virtual-view grayscale image, the second image and the virtual view comprises:
obtaining a high dynamic range image marked with the hole pixels according to the high dynamic range grayscale image marked with the hole pixels, the second grayscale image, the virtual-view grayscale image marked with the hole pixels, the second image and the virtual view marked with the hole pixels;
after obtaining a high dynamic range image according to the high dynamic range grayscale image, the second image, and the virtual view grayscale image, the method further comprises:
determining, in the second image, a first pixel corresponding to each hole pixel in the high dynamic range image marked with the hole pixels;
acquiring a similarity coefficient between adjacent pixels of each hole pixel in the high dynamic range image and adjacent pixels of the first pixel; and obtaining the pixel value of each hole pixel in the high dynamic range image according to the similarity coefficient and the first pixel.
3. The method according to claim 1 or 2,
the binocular stereo matching of the first image and the second image to obtain the disparity map comprises the following steps:
acquiring a set of candidate disparity values for each pixel of the first image; wherein the set of candidate disparity values comprises at least two candidate disparity values;
obtaining the matching energy $E_d(p,d_i)$ of each candidate disparity value in the set of candidate disparity values of each pixel of the first image according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the set of candidate disparity values of each pixel of the first image; wherein p denotes a pixel of the first image corresponding to the set of candidate disparity values; $d_i$ denotes the i-th candidate disparity value of the pixel p, $i=1,\dots,k$; k is the total number of candidate disparity values in the set of candidate disparity values of the pixel p;
obtaining a disparity value of each pixel in the first image according to the matching energy $E_d(p,d_i)$ of each candidate disparity value in the set of candidate disparity values of each pixel of the first image;
and combining the parallax value of each pixel in the first image to obtain the parallax map.
4. The method of claim 3,
the obtaining, according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the set of candidate disparity values of each pixel of the first image, the matching energy $E_d(p,d_i)$ of each candidate disparity value in the set of candidate disparity values of each pixel of the first image comprises:
for each candidate disparity value in the candidate disparity set of the pixel p, using the formula
$$E_d(p,d_i)=\frac{\sum_{q\in\Omega_p}w(p,q,d_i)\times[I_1(q)-a\times I_2(q-d_i)-b]^2}{\sum_{q\in\Omega_p}w(p,q,d_i)}$$
to find the matching energy $E_d(p,d_i)$ of each candidate disparity value $d_i$ of the pixel p; wherein the value of the first fitting parameter a and the value of the second fitting parameter b are the values for which the matching energy $E_d(p,d_i)$ reaches its minimum; $w(p,q,d_i)=w_c(p,q,d_i)\,w_s(p,q,d_i)\,w_d(p,q,d_i)$; the first pixel block $\Omega_p$ denotes a pixel block of the first image containing the pixel p; the pixel q is a pixel adjacent to the pixel p and belonging to the first pixel block $\Omega_p$; $I_1(q)$ denotes the pixel value of pixel q; $I_2(q-d_i)$ denotes the pixel value of the pixel $q-d_i$ in the second image corresponding to the pixel q; $w_c(p,q,d_i)$ denotes a pixel weight value; $w_s(p,q,d_i)$ denotes a distance weight value; $w_d(p,q,d_i)$ denotes a parallax weight value.
5. The method of claim 4,
the pixel weight value $w_c(p,q,d_i)$ can be obtained according to the formula
$w_c(p,q,d_i)=\exp[-\beta_1\times|I_1(p)-I_1(q)|\times|I_2(p-d_i)-I_2(q-d_i)|]$;
the distance weight value $w_s(p,q,d_i)$ can be obtained according to the formula
$w_s(p,q,d_i)=\exp[-\beta_2\times(p-q)^2]$;
the parallax weight value $w_d(p,q,d_i)$ can be obtained according to the formula
$w_d(p,q,d_i)=\exp[-\beta_3\times(p-q)^2-\beta_4\times|I_1(p)-I_1(q)|\times|I_2(p-d_i)-I_2(q-d_i)|]$;
wherein $I_1(p)$ denotes the pixel value of the pixel p; $I_2(p-d_i)$ denotes the pixel value of the pixel $p-d_i$ in the second image corresponding to the pixel p; the first weight coefficient $\beta_1$, the second weight coefficient $\beta_2$, the third weight coefficient $\beta_3$ and the fourth weight coefficient $\beta_4$ are preset values.
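To make the fitting concrete, the sketch below evaluates the weights of this claim along one scan line and obtains the fitting parameters a and b in closed form as the weighted least-squares minimizer of the matching energy. The 1-D indexing and the parameter packaging are simplifying assumptions.

```python
import numpy as np

def weight(p, q, I1, I2d, betas):
    """w(p,q,d_i) = w_c * w_s * w_d.  I1 is the first-image row, I2d the
    second-image row already shifted by d_i; p, q are column indices;
    betas = (beta1, beta2, beta3, beta4), all preset."""
    b1, b2, b3, b4 = betas
    diff = abs(I1[p] - I1[q]) * abs(I2d[p] - I2d[q])
    w_c = np.exp(-b1 * diff)
    w_s = np.exp(-b2 * (p - q) ** 2)
    w_d = np.exp(-b3 * (p - q) ** 2 - b4 * diff)
    return w_c * w_s * w_d

def matching_energy(p, window, I1, I2d, betas):
    """E_d(p, d_i) over the block Omega_p: fit I1(q) ~ a*I2d(q) + b by
    weighted least squares (the minimising a, b of the claim) and
    return the weight-normalised residual."""
    q = np.asarray(window)
    w = np.array([weight(p, qi, I1, I2d, betas) for qi in q])
    x, y, W = I2d[q], I1[q], w.sum()
    xm, ym = (w * x).sum() / W, (w * y).sum() / W
    var = (w * (x - xm) ** 2).sum()
    a = (w * (x - xm) * (y - ym)).sum() / var if var > 0 else 0.0
    b = ym - a * xm
    return (w * (y - a * x - b) ** 2).sum() / W
```

The closed form drops straight out of setting the derivatives of the energy with respect to a and b to zero, so no iterative search is needed per candidate disparity.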
6. The method of claim 3,
the obtaining, according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the set of candidate disparity values of each pixel of the first image, the matching energy $E_d(p,d_i)$ of each candidate disparity value in the set of candidate disparity values of each pixel of the first image comprises:
for each candidate disparity value in the candidate disparity set of the pixel p, using the formula
$$E_d(p,d_i)=\frac{\sum_{q\in\Omega_p}w(p,q,d_i)\times[I'_1(q)-a\times I'_2(q-d_i)-b]^2}{\sum_{q\in\Omega_p}w(p,q,d_i)}$$
to find the matching energy $E_d(p,d_i)$ of each candidate disparity value $d_i$ of the pixel p;
wherein $w(p,q,d_i)=w_c(p,q,d_i)\,w_s(p,q,d_i)\,w_d(p,q,d_i)$;
the pixel weight value $w_c(p,q,d_i)$ can be obtained according to the formula
$w_c(p,q,d_i)=\exp[-\beta_1\times|I'_1(p)-I'_1(q)|\times|I'_2(p-d_i)-I'_2(q-d_i)|]$;
the distance weight value $w_s(p,q,d_i)$ can be obtained according to the formula
$w_s(p,q,d_i)=\exp[-\beta_2\times(p-q)^2]$;
the parallax weight value $w_d(p,q,d_i)$ can be obtained according to the formula
$w_d(p,q,d_i)=\exp[-\beta_3\times(p-q)^2-\beta_4\times|I'_1(p)-I'_1(q)|\times|I'_2(p-d_i)-I'_2(q-d_i)|]$;
$I'_1(p)=I_1(p)\cos\theta-I_2(p-d_i)\sin\theta$;
$I'_2(p-d_i)=I_1(p)\sin\theta-I_2(p-d_i)\cos\theta$;
$I'_1(q)=I_1(q)\cos\theta-I_2(q-d_i)\sin\theta$;
$I'_2(q-d_i)=I_1(q)\sin\theta-I_2(q-d_i)\cos\theta$;
and the adjustment angle $\theta$ is a preset value greater than 0° and less than 90°.
7. The method according to any one of claims 3 to 6,
the obtaining a disparity value of each pixel in the first image according to the matching energy $E_d(p,d_i)$ of each candidate disparity value in the set of candidate disparity values of each pixel of the first image comprises:
according to the formula
$$E(d_i)=\sum_{p\in I}E_d(p,d_i)+\sum_{p\in I}\sum_{q\in N_p}V_{p,q}(d_i,d_j),$$
finding, for each pixel of the first image, the candidate disparity value $d_i$ in the set of candidate disparity values that minimizes the candidate energy $E(d_i)$, and determining that candidate disparity value as the disparity value of the pixel; wherein $I$ denotes the first image; the second pixel block $N_p$ denotes a pixel block of the first image containing the pixel p; $V_{p,q}(d_i,d_j)=\lambda\times\min(|d_i-d_j|,V_{max})$; $d_j$ denotes the j-th candidate disparity value of pixel q, $j=1,\dots,m$; m is the total number of candidate disparity values in the set of candidate disparity values of pixel q; the smoothing coefficient $\lambda$ and the maximum inter-pixel disparity difference $V_{max}$ are preset values.
8. The method according to any one of claims 1 to 7,
the obtaining a high dynamic range image according to the high dynamic range grayscale image, the second grayscale image, the virtual view grayscale image, the second image, and the virtual view includes:
sequentially using the formulas
$I^{red}(e)=\eta(e)\times I_2^{red}(e)+[1-\eta(e)]\times I_3^{red}(e)$,
$I^{green}(e)=\eta(e)\times I_2^{green}(e)+[1-\eta(e)]\times I_3^{green}(e)$ and
$I^{blue}(e)=\eta(e)\times I_2^{blue}(e)+[1-\eta(e)]\times I_3^{blue}(e)$
to obtain a red component value $I^{red}(e)$, a green component value $I^{green}(e)$ and a blue component value $I^{blue}(e)$ of each pixel in the high dynamic range image; wherein e denotes a pixel e in the high dynamic range image; $I^{grey}(e)$ denotes the pixel value of the pixel corresponding to the pixel e in the high dynamic range grayscale image, $I_2^{grey}(e)$ denotes the pixel value of the pixel corresponding to the pixel e in the second grayscale image, and $I_3^{grey}(e)$ denotes the pixel value of the pixel corresponding to the pixel e in the virtual-view grayscale image; $I_2^{red}(e)$, $I_2^{green}(e)$ and $I_2^{blue}(e)$ respectively denote the red, green and blue component values of the pixel corresponding to the pixel e in the second image; $I_3^{red}(e)$, $I_3^{green}(e)$ and $I_3^{blue}(e)$ respectively denote the red, green and blue component values of the pixel corresponding to the pixel e in the virtual view;
obtaining a pixel value of each pixel in the high dynamic range image according to the red component value, the green component value and the blue component value of each pixel in the high dynamic range image;
and combining the pixel values of each pixel in the high dynamic range image to obtain the high dynamic range image.
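A compact sketch of this recombination follows. The blending weight η(e) is not reproduced in this text; the sketch assumes it is the coefficient that expresses the HDR grayscale value as a convex combination of the two grayscale inputs, which is flagged here as an assumption rather than claim language.

```python
import numpy as np

def recombine_color(g_hdr, g2, gv, img2, virtual, eps=1e-6):
    """Blend the RGB channels of the second image and the virtual view
    with a per-pixel weight eta(e), applied identically to the red,
    green and blue components as in the three formulas above."""
    denom = np.where(np.abs(g2 - gv) > eps, g2 - gv, eps)
    eta = np.clip((g_hdr - gv) / denom, 0.0, 1.0)[..., None]
    return eta * img2 + (1.0 - eta) * virtual   # I^c(e) per channel c
```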
9. The method according to any one of claims 2 to 8,
said marking noise pixels in the virtual view as hole pixels comprises:
determining at least two second pixels in the second image; the second pixels refer to pixels with the same pixel value;
obtaining at least two marked pixels in the virtual view according to the at least two second pixels in the second image; the at least two marked pixels in the virtual view are the pixels in the virtual view that respectively correspond to the at least two second pixels in the second image;
acquiring an average pixel value of the at least two marked pixels in the virtual view;
sequentially determining whether the difference between the pixel value of each of the at least two marked pixels in the virtual view and the average pixel value is greater than the noise threshold value;
and if the difference between the pixel value of a marked pixel and the average pixel value is greater than the noise threshold value, determining the marked pixel as a noise pixel and marking the noise pixel as a hole pixel.
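A sketch of this marking step for single-channel arrays: because the virtual view shares the second image's viewing angle, corresponding pixels are taken at the same coordinates, and the noise threshold is a preset parameter.

```python
import numpy as np

def mark_noise_as_holes(virtual, second, noise_threshold):
    """For every group of second-image pixels sharing one pixel value,
    compare each corresponding virtual-view pixel against the group
    mean; pixels deviating beyond the threshold become hole pixels."""
    holes = np.zeros(virtual.shape, dtype=bool)
    for value in np.unique(second):
        idx = np.flatnonzero(second == value)   # the "second pixels"
        if idx.size < 2:
            continue
        marked = virtual.flat[idx]              # their marked pixels
        deviate = np.abs(marked - marked.mean()) > noise_threshold
        holes.flat[idx[deviate]] = True
    return holes
```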
10. The method according to any one of claims 2 to 9,
the obtaining, for any hole pixel r in the high dynamic range image and the first pixel, a pixel value of the hole pixel r comprises:
obtaining the pixel value of the hole pixel r according to the formula $I(r)=\sum_{n=0}^{N}a_n\,I_2^{\,n}(r)$; wherein $I(r)$ denotes the pixel value of the hole pixel r; $I_2(r)$ denotes the pixel value of the pixel in the second image corresponding to the hole pixel r; $a_n$ denotes a similarity coefficient of the hole pixel r; $n=0,1,\dots,N$; N is a preset value.
11. The method of claim 10,
the obtaining a similarity coefficient between the adjacent pixels of any hole pixel r in the high dynamic range image and the adjacent pixels of the first pixel comprises:
according to the formula
$$[a_0,a_1,\dots,a_N]=\arg\min\sum_{s\in\Psi_r}\exp[-\gamma\,(r-s)^2]\times\Big[I(s)-\sum_{n=0}^{N}a_n\,I_2^{\,n}(s)\Big]^2,$$
obtaining the similarity coefficients between the adjacent pixels of any hole pixel r in the high dynamic range image and the adjacent pixels of the first pixel; wherein s denotes a pixel in a neighborhood $\Psi_r$ of the pixel r in the high dynamic range image; $I(s)$ denotes the pixel value of the pixel s; $I_2(s)$ denotes the pixel value of the pixel in the second image corresponding to the pixel s; $r-s$ denotes the distance between the pixel r and the pixel s; $\gamma$ is a preset weight coefficient for the distance between the pixel r and the pixel s.
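The fit of this claim is an ordinary weighted linear least-squares problem in the coefficients, since each basis value $I_2^{\,n}(s)$ is known. A sketch follows, including the claim-10 reconstruction that applies the coefficients; the coordinate handling and the neighbour list are assumptions.

```python
import numpy as np

def similarity_coefficients(r, neigh, I_hdr, I2, N, gamma):
    """Solve [a_0..a_N] = argmin sum_s exp(-gamma*(r-s)^2) *
    (I(s) - sum_n a_n * I_2^n(s))^2 over neighbours s with known HDR
    values.  r and neigh are (y, x) coordinates; I_hdr and I2 are the
    HDR image and the second image as grayscale arrays."""
    ry, rx = r
    w, X, y = [], [], []
    for sy, sx in neigh:
        w.append(np.exp(-gamma * ((ry - sy) ** 2 + (rx - sx) ** 2)))
        X.append([I2[sy, sx] ** n for n in range(N + 1)])  # I_2^n(s)
        y.append(I_hdr[sy, sx])
    sw = np.sqrt(np.array(w))
    X, y = np.array(X), np.array(y)
    a, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return a

def fill_hole(r, a, I2):
    # Claim-10 reconstruction: I(r) = sum_n a_n * I_2^n(r).
    ry, rx = r
    return sum(a[n] * I2[ry, rx] ** n for n in range(len(a)))
```

Claim 12 below swaps the distance weighting for a two-term regularized objective that also penalizes deviation from the coefficients $a'_n$ of the first pass; the same least-squares machinery applies once both terms are stacked.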
12. The method of claim 10,
the obtaining a similarity coefficient between the adjacent pixels of any hole pixel r in the high dynamic range image and the adjacent pixels of the first pixel comprises:
according to the formula
$$[a_0,a_1,\dots,a_N]=\arg\min\ \rho_1\sum_{s\in\Phi_r}\Big[I(s)-\sum_{n=0}^{N}a_n\,I_2^{\,n}(s)\Big]^2+\rho_2\sum_{s\in A}\Big[I(s)-\sum_{n=0}^{N}a'_n\,I_2^{\,n}(s)\Big]^2,$$
obtaining the similarity coefficients between the adjacent pixels of any hole pixel r in the high dynamic range image and the adjacent pixels of the first pixel; wherein the first scale factor $\rho_1$ and the second scale factor $\rho_2$ are preset values; s denotes a pixel in a neighborhood $\Phi_r$ of the pixel r in the high dynamic range image; A denotes the high dynamic range image; $a'_n$ denotes the similarity coefficient obtained the first time the pixel value of the hole pixel was calculated.
13. The method of claim 10,
the obtaining a similarity coefficient between the adjacent pixels of any hole pixel r in the high dynamic range image and the adjacent pixels of the first pixel comprises:
determining whether a first hole pixel exists for the hole pixel r; the first hole pixel is an adjacent hole pixel of the hole pixel r whose pixel value has already been obtained;
and if the first hole pixel is determined to exist, taking the similarity coefficient of the first hole pixel as the similarity coefficient of the hole pixel r.
14. A method of synthesizing a disparity map, comprising:
acquiring a first image and a second image; the first image and the second image are obtained by shooting the same object at the same time;
acquiring a set of candidate disparity values for each pixel of the first image; wherein the set of candidate disparity values comprises at least two candidate disparity values;
obtaining the matching energy $E_d(p,d_i)$ of each candidate disparity value in the set of candidate disparity values of each pixel of the first image according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the set of candidate disparity values of each pixel of the first image; wherein p denotes a pixel of the first image corresponding to the set of candidate disparity values; $d_i$ denotes the i-th candidate disparity value of the pixel p, $i=1,\dots,k$; k is the total number of candidate disparity values in the set of candidate disparity values of the pixel p;
obtaining a disparity value of each pixel in the first image according to the matching energy $E_d(p,d_i)$ of each candidate disparity value in the set of candidate disparity values of each pixel of the first image;
and combining the parallax value of each pixel in the first image to obtain the parallax map.
15. The method of claim 14,
the obtaining, according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the set of candidate disparity values of each pixel of the first image, the matching energy $E_d(p,d_i)$ of each candidate disparity value in the set of candidate disparity values of each pixel of the first image comprises:
for each candidate disparity value in the candidate disparity set of the pixel p, using the formula
$$E_d(p,d_i)=\frac{\sum_{q\in\Omega_p}w(p,q,d_i)\times[I_1(q)-a\times I_2(q-d_i)-b]^2}{\sum_{q\in\Omega_p}w(p,q,d_i)}$$
to find the matching energy $E_d(p,d_i)$ of each candidate disparity value $d_i$ of the pixel p; wherein the value of the first fitting parameter a and the value of the second fitting parameter b are the values for which the matching energy $E_d(p,d_i)$ reaches its minimum; $w(p,q,d_i)=w_c(p,q,d_i)\,w_s(p,q,d_i)\,w_d(p,q,d_i)$; the first pixel block $\Omega_p$ denotes a pixel block of the first image containing the pixel p; the pixel q is a pixel adjacent to the pixel p and belonging to the first pixel block $\Omega_p$; $I_1(q)$ denotes the pixel value of pixel q; $I_2(q-d_i)$ denotes the pixel value of the pixel $q-d_i$ in the second image corresponding to the pixel q; $w_c(p,q,d_i)$ denotes a pixel weight value; $w_s(p,q,d_i)$ denotes a distance weight value; $w_d(p,q,d_i)$ denotes a parallax weight value.
16. The method of claim 15,
the pixel weight value $w_c(p,q,d_i)$ can be obtained according to the formula
$w_c(p,q,d_i)=\exp[-\beta_1\times|I_1(p)-I_1(q)|\times|I_2(p-d_i)-I_2(q-d_i)|]$;
the distance weight value $w_s(p,q,d_i)$ can be obtained according to the formula
$w_s(p,q,d_i)=\exp[-\beta_2\times(p-q)^2]$;
the parallax weight value $w_d(p,q,d_i)$ can be obtained according to the formula
$w_d(p,q,d_i)=\exp[-\beta_3\times(p-q)^2-\beta_4\times|I_1(p)-I_1(q)|\times|I_2(p-d_i)-I_2(q-d_i)|]$;
wherein $I_1(p)$ denotes the pixel value of the pixel p; $I_2(p-d_i)$ denotes the pixel value of the pixel $p-d_i$ in the second image corresponding to the pixel p; the first weight coefficient $\beta_1$, the second weight coefficient $\beta_2$, the third weight coefficient $\beta_3$ and the fourth weight coefficient $\beta_4$ are preset values.
17. The method of claim 14,
the obtaining, according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the set of candidate disparity values of each pixel of the first image, the matching energy $E_d(p,d_i)$ of each candidate disparity value in the set of candidate disparity values of each pixel of the first image comprises:
for each candidate disparity value in the candidate disparity set of the pixel p, using the formula
$$E_d(p,d_i)=\frac{\sum_{q\in\Omega_p}w(p,q,d_i)\times[I'_1(q)-a\times I'_2(q-d_i)-b]^2}{\sum_{q\in\Omega_p}w(p,q,d_i)}$$
to find the matching energy $E_d(p,d_i)$ of each candidate disparity value $d_i$ of the pixel p;
wherein $w(p,q,d_i)=w_c(p,q,d_i)\,w_s(p,q,d_i)\,w_d(p,q,d_i)$;
the pixel weight value $w_c(p,q,d_i)$ can be obtained according to the formula
$w_c(p,q,d_i)=\exp[-\beta_1\times|I'_1(p)-I'_1(q)|\times|I'_2(p-d_i)-I'_2(q-d_i)|]$;
the distance weight value $w_s(p,q,d_i)$ can be obtained according to the formula
$w_s(p,q,d_i)=\exp[-\beta_2\times(p-q)^2]$;
the parallax weight value $w_d(p,q,d_i)$ can be obtained according to the formula
$w_d(p,q,d_i)=\exp[-\beta_3\times(p-q)^2-\beta_4\times|I'_1(p)-I'_1(q)|\times|I'_2(p-d_i)-I'_2(q-d_i)|]$;
$I'_1(p)=I_1(p)\cos\theta-I_2(p-d_i)\sin\theta$;
$I'_2(p-d_i)=I_1(p)\sin\theta-I_2(p-d_i)\cos\theta$;
$I'_1(q)=I_1(q)\cos\theta-I_2(q-d_i)\sin\theta$;
$I'_2(q-d_i)=I_1(q)\sin\theta-I_2(q-d_i)\cos\theta$;
and the adjustment angle $\theta$ is a preset value greater than 0° and less than 90°.
18. The method according to any one of claims 14 to 17,
the obtaining a disparity value of each pixel in the first image according to the matching energy $E_d(p,d_i)$ of each candidate disparity value in the set of candidate disparity values of each pixel of the first image comprises:
according to the formula
$$E(d_i)=\sum_{p\in I}E_d(p,d_i)+\sum_{p\in I}\sum_{q\in N_p}V_{p,q}(d_i,d_j),$$
finding, for each pixel of the first image, the candidate disparity value $d_i$ in the set of candidate disparity values that minimizes the candidate energy $E(d_i)$, and determining that candidate disparity value as the disparity value of the pixel; wherein $I$ denotes the first image; the second pixel block $N_p$ denotes a pixel block of the first image containing the pixel p; $V_{p,q}(d_i,d_j)=\lambda\times\min(|d_i-d_j|,V_{max})$; $d_j$ denotes the j-th candidate disparity value of pixel q, $j=1,\dots,m$; m is the total number of candidate disparity values in the set of candidate disparity values of pixel q; the smoothing coefficient $\lambda$ and the maximum inter-pixel disparity difference $V_{max}$ are preset values.
19. A high dynamic range image synthesizing apparatus characterized by comprising:
an acquisition unit configured to acquire a first image and a second image; the first image and the second image are obtained by shooting the same object at the same time by adopting different exposure degrees;
the parallax processing unit is used for carrying out binocular stereo matching on the first image and the second image acquired by the acquisition unit to obtain a parallax image;
a virtual view synthesis unit, configured to synthesize a virtual view having the same viewing angle as the second image according to the disparity map obtained by the disparity processing unit and the first image obtained by the obtaining unit;
the grayscale extraction unit is used for obtaining a second grayscale image according to the second image obtained by the acquisition unit and obtaining a virtual-view grayscale image according to the virtual view synthesized by the virtual view synthesis unit;
the high dynamic range fusion unit is used for obtaining a high dynamic range grayscale image through a high dynamic range synthesis algorithm according to the second grayscale image and the virtual-view grayscale image obtained by the grayscale extraction unit;
and the color interpolation unit is used for obtaining a high dynamic range image according to the high dynamic range grayscale image, the second grayscale image, the virtual-view grayscale image, the second image and the virtual view.
20. The apparatus of claim 19, further comprising a hole pixel processing unit:
the hole pixel processing unit is used for marking the noise pixels and/or the occlusion regions in the virtual view as hole pixels; the occlusion region is a region that arises because the first image and the second image capture the same object from different angles; the noise pixel is generated by a pixel with a disparity value calculation error in the disparity map;
the grayscale extraction unit is specifically used for obtaining a virtual-view grayscale image marked with the hole pixels according to the virtual view marked with the hole pixels;
the high dynamic range fusion unit is specifically used for obtaining a high dynamic range grayscale image marked with the hole pixels through a high dynamic range synthesis algorithm according to the second grayscale image and the virtual-view grayscale image marked with the hole pixels;
the color interpolation unit is specifically configured to obtain a high dynamic range image marked with the hole pixels according to the high dynamic range grayscale image marked with the hole pixels, the second grayscale image, the virtual-view grayscale image marked with the hole pixels, the second image, and the virtual view marked with the hole pixels;
the hole pixel processing unit is further configured to determine, in the second image, a first pixel corresponding to each hole pixel in the high dynamic range image marked with the hole pixel;
the hole pixel processing unit is further configured to obtain a similarity coefficient between an adjacent pixel of each hole pixel in the high dynamic range image and an adjacent pixel of the first pixel; and obtaining the pixel value of each hole pixel in the high dynamic range image according to the similarity coefficient and the first pixel.
21. The apparatus according to claim 19 or 20, wherein the disparity processing unit comprises: the device comprises an acquisition module, a calculation module, a determination module and a combination module;
the obtaining module is configured to obtain a set of candidate disparity values for each pixel of the first image; wherein the set of candidate disparity values comprises at least two candidate disparity values;
the computing module is configured to obtain the matching energy $E_d(p,d_i)$ of each candidate disparity value in the set of candidate disparity values of each pixel of the first image according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the set of candidate disparity values of each pixel of the first image; wherein p denotes a pixel of the first image corresponding to the set of candidate disparity values; $d_i$ denotes the i-th candidate disparity value of the pixel p, $i=1,\dots,k$; k is the total number of candidate disparity values in the set of candidate disparity values of the pixel p;
the determining module is configured to obtain a disparity value of each pixel in the first image according to the matching energy $E_d(p,d_i)$ of each candidate disparity value in the set of candidate disparity values of each pixel of the first image;
the combination module is used for combining the parallax value of each pixel in the first image to obtain the parallax map.
22. The apparatus of claim 21,
the computing module is specifically configured to, for each candidate disparity value in the candidate disparity set of the pixel p, use the formula
$$E_d(p,d_i)=\frac{\sum_{q\in\Omega_p}w(p,q,d_i)\times[I_1(q)-a\times I_2(q-d_i)-b]^2}{\sum_{q\in\Omega_p}w(p,q,d_i)}$$
to find the matching energy $E_d(p,d_i)$ of each candidate disparity value $d_i$ of the pixel p; wherein the value of the first fitting parameter a and the value of the second fitting parameter b are the values for which the matching energy $E_d(p,d_i)$ reaches its minimum; $w(p,q,d_i)=w_c(p,q,d_i)\,w_s(p,q,d_i)\,w_d(p,q,d_i)$; the first pixel block $\Omega_p$ denotes a pixel block of the first image containing the pixel p; the pixel q is a pixel adjacent to the pixel p and belonging to the first pixel block $\Omega_p$; $I_1(q)$ denotes the pixel value of pixel q; $I_2(q-d_i)$ denotes the pixel value of the pixel $q-d_i$ in the second image corresponding to the pixel q; $w_c(p,q,d_i)$ denotes a pixel weight value; $w_s(p,q,d_i)$ denotes a distance weight value; $w_d(p,q,d_i)$ denotes a parallax weight value.
23. The apparatus of claim 22,
the pixel weight value $w_c(p,q,d_i)$ can be obtained according to the formula
$w_c(p,q,d_i)=\exp[-\beta_1\times|I_1(p)-I_1(q)|\times|I_2(p-d_i)-I_2(q-d_i)|]$;
the distance weight value $w_s(p,q,d_i)$ can be obtained according to the formula
$w_s(p,q,d_i)=\exp[-\beta_2\times(p-q)^2]$;
the parallax weight value $w_d(p,q,d_i)$ can be obtained according to the formula
$w_d(p,q,d_i)=\exp[-\beta_3\times(p-q)^2-\beta_4\times|I_1(p)-I_1(q)|\times|I_2(p-d_i)-I_2(q-d_i)|]$;
wherein $I_1(p)$ denotes the pixel value of the pixel p; $I_2(p-d_i)$ denotes the pixel value of the pixel $p-d_i$ in the second image corresponding to the pixel p; the first weight coefficient $\beta_1$, the second weight coefficient $\beta_2$, the third weight coefficient $\beta_3$ and the fourth weight coefficient $\beta_4$ are preset values.
24. The apparatus of claim 21,
the computing module is specifically configured to, for each candidate disparity value in the candidate disparity set of the pixel p, use the formula
$$E_d(p,d_i)=\frac{\sum_{q\in\Omega_p}w(p,q,d_i)\times[I'_1(q)-a\times I'_2(q-d_i)-b]^2}{\sum_{q\in\Omega_p}w(p,q,d_i)}$$
to find the matching energy $E_d(p,d_i)$ of each candidate disparity value $d_i$ of the pixel p;
wherein $w(p,q,d_i)=w_c(p,q,d_i)\,w_s(p,q,d_i)\,w_d(p,q,d_i)$;
the pixel weight value $w_c(p,q,d_i)$ can be obtained according to the formula
$w_c(p,q,d_i)=\exp[-\beta_1\times|I'_1(p)-I'_1(q)|\times|I'_2(p-d_i)-I'_2(q-d_i)|]$;
the distance weight value $w_s(p,q,d_i)$ can be obtained according to the formula
$w_s(p,q,d_i)=\exp[-\beta_2\times(p-q)^2]$;
the parallax weight value $w_d(p,q,d_i)$ can be obtained according to the formula
$w_d(p,q,d_i)=\exp[-\beta_3\times(p-q)^2-\beta_4\times|I'_1(p)-I'_1(q)|\times|I'_2(p-d_i)-I'_2(q-d_i)|]$;
$I'_1(p)=I_1(p)\cos\theta-I_2(p-d_i)\sin\theta$;
$I'_2(p-d_i)=I_1(p)\sin\theta-I_2(p-d_i)\cos\theta$;
$I'_1(q)=I_1(q)\cos\theta-I_2(q-d_i)\sin\theta$;
$I'_2(q-d_i)=I_1(q)\sin\theta-I_2(q-d_i)\cos\theta$;
and the adjustment angle $\theta$ is a preset value greater than 0° and less than 90°.
25. The apparatus according to any one of claims 21 to 24,
the determining module is specifically configured to, according to the formula
$$E(d_i)=\sum_{p\in I}E_d(p,d_i)+\sum_{p\in I}\sum_{q\in N_p}V_{p,q}(d_i,d_j),$$
find, for each pixel of the first image, the candidate disparity value $d_i$ in the set of candidate disparity values that minimizes the candidate energy $E(d_i)$, and determine that candidate disparity value as the disparity value of the pixel; wherein $I$ denotes the first image; the second pixel block $N_p$ denotes a pixel block of the first image containing the pixel p; $V_{p,q}(d_i,d_j)=\lambda\times\min(|d_i-d_j|,V_{max})$; $d_j$ denotes the j-th candidate disparity value of pixel q, $j=1,\dots,m$; m is the total number of candidate disparity values in the set of candidate disparity values of pixel q; the smoothing coefficient $\lambda$ and the maximum inter-pixel disparity difference $V_{max}$ are preset values.
26. The apparatus according to any one of claims 19 to 25,
the color interpolation unit is specifically configured to sequentially use the formulas
$I^{red}(e)=\eta(e)\times I_2^{red}(e)+[1-\eta(e)]\times I_3^{red}(e)$,
$I^{green}(e)=\eta(e)\times I_2^{green}(e)+[1-\eta(e)]\times I_3^{green}(e)$ and
$I^{blue}(e)=\eta(e)\times I_2^{blue}(e)+[1-\eta(e)]\times I_3^{blue}(e)$
to obtain a red component value $I^{red}(e)$, a green component value $I^{green}(e)$ and a blue component value $I^{blue}(e)$ of each pixel in the high dynamic range image; wherein e denotes a pixel e in the high dynamic range image; $I^{grey}(e)$ denotes the pixel value of the pixel corresponding to the pixel e in the high dynamic range grayscale image, $I_2^{grey}(e)$ denotes the pixel value of the pixel corresponding to the pixel e in the second grayscale image, and $I_3^{grey}(e)$ denotes the pixel value of the pixel corresponding to the pixel e in the virtual-view grayscale image; $I_2^{red}(e)$, $I_2^{green}(e)$ and $I_2^{blue}(e)$ respectively denote the red, green and blue component values of the pixel corresponding to the pixel e in the second image; $I_3^{red}(e)$, $I_3^{green}(e)$ and $I_3^{blue}(e)$ respectively denote the red, green and blue component values of the pixel corresponding to the pixel e in the virtual view;
the color interpolation unit is specifically configured to obtain a pixel value of each pixel in the high dynamic range image according to the red component value, the green component value and the blue component value of each pixel in the high dynamic range image;
the color interpolation unit is specifically configured to combine the pixel values of each pixel in the high dynamic range image to obtain the high dynamic range image.
27. The apparatus of any one of claims 20-26,
the hole pixel processing unit is specifically configured to determine at least two second pixels in the second image; the second pixels refer to pixels with the same pixel value;
the hole pixel processing unit is specifically configured to obtain at least two marked pixels in the virtual view according to the at least two second pixels in the second image; the at least two marked pixels in the virtual view are the pixels in the virtual view that respectively correspond to the at least two second pixels in the second image;
the hole pixel processing unit is specifically configured to obtain an average pixel value of the at least two marked pixels in the virtual view;
the hole pixel processing unit is specifically configured to sequentially determine whether the difference between the pixel value of each of the at least two marked pixels in the virtual view and the average pixel value is greater than the noise threshold value;
the hole pixel processing unit is specifically configured to determine a marked pixel as a noise pixel and mark the noise pixel as a hole pixel when the difference between the pixel value of the marked pixel and the average pixel value is greater than the noise threshold value.
28. The apparatus according to any one of claims 20-27,
the hole pixel processing unit is specifically configured to obtain the pixel value of the hole pixel r according to the formula $I(r)=\sum_{n=0}^{N}a_n\,I_2^{\,n}(r)$; wherein $I(r)$ denotes the pixel value of the hole pixel r; $I_2(r)$ denotes the pixel value of the pixel in the second image corresponding to the hole pixel r; $a_n$ denotes a similarity coefficient of the hole pixel r; $n=0,1,\dots,N$; N is a preset value.
29. The apparatus of claim 28,
the hole pixel processing unit is specifically configured to, according to the formula
$$[a_0,a_1,\dots,a_N]=\arg\min\sum_{s\in\Psi_r}\exp[-\gamma\,(r-s)^2]\times\Big[I(s)-\sum_{n=0}^{N}a_n\,I_2^{\,n}(s)\Big]^2,$$
obtain the similarity coefficients between the adjacent pixels of any hole pixel r in the high dynamic range image and the adjacent pixels of the first pixel; wherein s denotes a pixel in a neighborhood $\Psi_r$ of the pixel r in the high dynamic range image; $I(s)$ denotes the pixel value of the pixel s; $I_2(s)$ denotes the pixel value of the pixel in the second image corresponding to the pixel s; $r-s$ denotes the distance between the pixel r and the pixel s; $\gamma$ is a preset weight coefficient for the distance between the pixel r and the pixel s.
30. The apparatus of claim 28,
the hole pixel processing unit is specifically configured to, according to the formula
$$[a_0,a_1,\dots,a_N]=\arg\min\ \rho_1\sum_{s\in\Phi_r}\Big[I(s)-\sum_{n=0}^{N}a_n\,I_2^{\,n}(s)\Big]^2+\rho_2\sum_{s\in A}\Big[I(s)-\sum_{n=0}^{N}a'_n\,I_2^{\,n}(s)\Big]^2,$$
obtain the similarity coefficients between the adjacent pixels of any hole pixel r in the high dynamic range image and the adjacent pixels of the first pixel; wherein the first scale factor $\rho_1$ and the second scale factor $\rho_2$ are preset values; s denotes a pixel in a neighborhood $\Phi_r$ of the pixel r in the high dynamic range image; A denotes the high dynamic range image; $a'_n$ denotes the similarity coefficient obtained the first time the pixel value of the hole pixel was calculated.
31. The apparatus of claim 28,
the hole pixel processing unit is specifically configured to determine whether a first hole pixel exists for the hole pixel r; the first hole pixel is an adjacent hole pixel of the hole pixel r whose pixel value has already been obtained;
the hole pixel processing unit is specifically configured to, when it is determined that the first hole pixel exists, use a similarity coefficient of the first hole pixel as a similarity coefficient of the hole pixel r.
32. An apparatus, comprising:
an acquisition unit configured to acquire a first image and a second image; the first image and the second image are obtained by shooting the same object at the same time;
the acquiring unit is further configured to acquire a set of candidate disparity values for each pixel of the first image; wherein the set of candidate disparity values comprises at least two candidate disparity values;
a calculating unit, configured to obtain the matching energy $E_d(p,d_i)$ of each candidate disparity value in the set of candidate disparity values of each pixel of the first image according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the set of candidate disparity values of each pixel of the first image; wherein p denotes a pixel of the first image corresponding to the set of candidate disparity values; $d_i$ denotes the i-th candidate disparity value of the pixel p, $i=1,\dots,k$; k is the total number of candidate disparity values in the set of candidate disparity values of the pixel p;
a determining unit, configured to obtain a disparity value of each pixel in the first image according to the matching energy $E_d(p,d_i)$ of each candidate disparity value in the set of candidate disparity values of each pixel of the first image;
and the processing unit is used for combining the parallax value of each pixel in the first image to obtain the parallax map.
33. The apparatus of claim 32,
the calculating unit is specifically configured to, for each candidate disparity value in the candidate disparity value set of the pixel p, use the formula

$$E_d(p, d_i) = \frac{\sum_{q \in \Omega_p} w(p, q, d_i)\,\left[I_1(q) - a \cdot I_2(q - d_i) - b\right]^2}{\sum_{q \in \Omega_p} w(p, q, d_i)}$$

to obtain the matching energy E_d(p, d_i) of each candidate disparity value d_i of the pixel p; wherein the values of the first fitting parameter a and the second fitting parameter b are those for which the matching energy E_d(p, d_i) takes its minimum value; w(p, q, d_i) = w_c(p, q, d_i) · w_s(p, q, d_i) · w_d(p, q, d_i); the first pixel block Ω_p denotes a block of pixels of the first image containing the pixel p; q denotes a pixel adjacent to the pixel p and belonging to the first pixel block Ω_p; I_1(q) denotes the pixel value of the pixel q; I_2(q − d_i) denotes the pixel value of the pixel q − d_i in the second image corresponding to the pixel q; w_c(p, q, d_i) denotes a pixel weight value; w_s(p, q, d_i) denotes a distance weight value; and w_d(p, q, d_i) denotes a disparity weight value.
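Because a and b are defined only as the values that minimize E_d(p, d_i), one natural reading is a weighted least-squares line fit of I_1 against the shifted I_2 over the block Ω_p, for which the minimizers have a standard closed form. A sketch under that assumption (function and argument names are illustrative):

```python
import numpy as np

def matching_energy(I1_patch, I2_patch, w):
    """E_d(p, d_i) of claim 33 as a weighted least-squares fit (sketch).

    I1_patch holds I_1(q) over the block Omega_p, I2_patch the shifted
    values I_2(q - d_i), and w the combined weights w(p, q, d_i).
    a and b are the closed-form minimizers of the weighted squared
    residual, so the returned energy is the minimum the claim refers to.
    """
    x = np.asarray(I2_patch, dtype=float).ravel()
    y = np.asarray(I1_patch, dtype=float).ravel()
    w = np.asarray(w, dtype=float).ravel()
    W, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
    Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
    denom = W * Sxx - Sx * Sx
    a = (W * Sxy - Sx * Sy) / denom if denom else 0.0  # fitted slope
    b = (Sy - a * Sx) / W                              # fitted intercept
    r = y - a * x - b
    return (w * r * r).sum() / W                       # normalized residual energy
```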
34. The apparatus of claim 33,
the pixel weight value w_c(p, q, d_i) can be obtained according to the formula
w_c(p, q, d_i) = exp[−β_1 × |I_1(p) − I_1(q)| × |I_2(p − d_i) − I_2(q − d_i)|];
the distance weight value w_s(p, q, d_i) can be obtained according to the formula
w_s(p, q, d_i) = exp[−β_2 × (p − q)^2];
the disparity weight value w_d(p, q, d_i) can be obtained according to the formula
w_d(p, q, d_i) = exp[−β_3 × (p − q)^2 − β_4 × |I_1(p) − I_1(q)| × |I_2(p − d_i) − I_2(q − d_i)|];
wherein I_1(p) denotes the pixel value of the pixel p; I_2(p − d_i) denotes the pixel value of the pixel p − d_i in the second image corresponding to the pixel p; and the first weight coefficient β_1, the second weight coefficient β_2, the third weight coefficient β_3 and the fourth weight coefficient β_4 are preset values.
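The three weights multiply into the w(p, q, d_i) used by claim 33. A direct transcription in Python, simplified to 1-D pixel coordinates so that (p − q)^2 and the disparity shift are plain integer arithmetic (the names and the 1-D simplification are mine, not the patent's):

```python
import numpy as np

def combined_weight(I1, I2, p, q, d, beta1, beta2, beta3, beta4):
    """w(p, q, d_i) = w_c * w_s * w_d per claim 34 (1-D sketch)."""
    joint = abs(I1[p] - I1[q]) * abs(I2[p - d] - I2[q - d])  # joint intensity term
    dist2 = (p - q) ** 2                                     # squared distance term
    w_c = np.exp(-beta1 * joint)                  # pixel weight value
    w_s = np.exp(-beta2 * dist2)                  # distance weight value
    w_d = np.exp(-beta3 * dist2 - beta4 * joint)  # disparity weight value
    return w_c * w_s * w_d
```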
35. The apparatus of claim 32,
the calculating unit is specifically configured to, for each candidate disparity value in the candidate disparity value set of the pixel p, use the formula

$$E_d(p, d_i) = \frac{\sum_{q \in \Omega_p} w(p, q, d_i)\,\left[I_1'(q) - a \cdot I_2'(q - d_i) - b\right]^2}{\sum_{q \in \Omega_p} w(p, q, d_i)}$$

to obtain the matching energy E_d(p, d_i) of each candidate disparity value d_i of the pixel p;
wherein w(p, q, d_i) = w_c(p, q, d_i) · w_s(p, q, d_i) · w_d(p, q, d_i);
the pixel weight value w_c(p, q, d_i) can be obtained according to the formula
w_c(p, q, d_i) = exp[−β_1 × |I′_1(p) − I′_1(q)| × |I′_2(p − d_i) − I′_2(q − d_i)|];
the distance weight value w_s(p, q, d_i) can be obtained according to the formula
w_s(p, q, d_i) = exp[−β_2 × (p − q)^2];
the disparity weight value w_d(p, q, d_i) can be obtained according to the formula
w_d(p, q, d_i) = exp[−β_3 × (p − q)^2 − β_4 × |I′_1(p) − I′_1(q)| × |I′_2(p − d_i) − I′_2(q − d_i)|];
I′_1(p) = I_1(p) cos θ − I_2(p − d_i) sin θ;
I′_2(p − d_i) = I_1(p) sin θ − I_2(p − d_i) cos θ;
I′_1(q) = I_1(q) cos θ − I_2(q − d_i) sin θ;
I′_2(q − d_i) = I_1(q) sin θ − I_2(q − d_i) cos θ;
and the adjustment angle θ is a preset value greater than 0° and less than 90°.
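Claim 35 thus runs the same matching energy on "rotated" intensity pairs: each pair (I_1(·), I_2(· − d_i)) is mapped through the preset angle θ before the fit. A sketch of that mapping, keeping the signs exactly as the claim prints them (a textbook rotation would use +cos θ in the second line, so the sign is worth noting; the function name is illustrative):

```python
import numpy as np

def rotate_pair(i1, i2, theta_deg):
    """Primed intensities of claim 35 for one pixel pair (sketch).

    i1 is I_1 at the pixel, i2 is I_2 at the disparity-shifted pixel;
    theta_deg is the preset adjustment angle, 0 < theta < 90 degrees.
    Signs follow the claim text as printed.
    """
    t = np.deg2rad(theta_deg)
    i1p = i1 * np.cos(t) - i2 * np.sin(t)  # I'_1
    i2p = i1 * np.sin(t) - i2 * np.cos(t)  # I'_2, minus sign as in the claim
    return i1p, i2p
```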
36. The apparatus of any one of claims 32-35,
the determination unit is specifically configured to, according to the formula

$$E(d_i) = \sum_{p \in I} E_d(p, d_i) + \sum_{p \in I} \sum_{q \in N_p} V_{p,q}(d_i, d_j),$$

determine, as the disparity value of each pixel of the first image, the candidate disparity value of that pixel for which the candidate energy E(d_i) takes its minimum value; wherein I denotes the first image; the second pixel block N_p denotes a block of pixels of the first image containing the pixel p; V_{p,q}(d_i, d_j) = λ × min(|d_i − d_j|, V_max); d_j denotes the j-th candidate disparity value of the pixel q, j = 1, …, m; m is the total number of candidate disparity values in the candidate disparity value set of the pixel q; the smoothing coefficient λ is a preset value; and the maximum difference V_max between disparities of adjacent pixels is a preset value.
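The candidate energy is a standard data-plus-smoothness objective: the per-pixel matching energies plus a truncated-linear penalty λ · min(|d_i − d_j|, V_max) between neighbouring pixels. A sketch that only evaluates the energy of a given disparity assignment (minimizing it, for example with graph cuts or belief propagation, goes beyond the claim's wording and is not shown; all names are illustrative):

```python
def candidate_energy(disp, data_energy, neighbors, lam, v_max):
    """E(d) of claim 36 for one disparity assignment (sketch).

    disp maps each pixel to its chosen candidate disparity,
    data_energy maps each pixel p to E_d(p, disp[p]), and
    neighbors(p) yields the pixels of the second pixel block N_p.
    """
    data = sum(data_energy[p] for p in disp)
    smooth = sum(lam * min(abs(disp[p] - disp[q]), v_max)  # V_{p,q}(d_i, d_j)
                 for p in disp for q in neighbors(p))
    return data + smooth
```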
CN201410101591.1A 2014-03-18 2014-03-18 Method and device for high-dynamic-range image synthesis Active CN104935911B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201410101591.1A CN104935911B (en) 2014-03-18 2014-03-18 Method and device for high-dynamic-range image synthesis
PCT/CN2014/089071 WO2015139454A1 (en) 2014-03-18 2014-10-21 Method and device for synthesizing high dynamic range image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410101591.1A CN104935911B (en) 2014-03-18 2014-03-18 Method and device for high-dynamic-range image synthesis

Publications (2)

Publication Number Publication Date
CN104935911A true CN104935911A (en) 2015-09-23
CN104935911B CN104935911B (en) 2017-07-21

Family

ID=54122843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410101591.1A Active CN104935911B (en) Method and device for high-dynamic-range image synthesis

Country Status (2)

Country Link
CN (1) CN104935911B (en)
WO (1) WO2015139454A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170318273A1 (en) * 2016-04-28 2017-11-02 Qualcomm Incorporated Shift-and-match fusion of color and mono images
CN108354435A (en) * 2017-01-23 2018-08-03 上海长膳智能科技有限公司 Automatic cooking apparatus and method of cooking using the same
CN112149493B (en) * 2020-07-31 2022-10-11 上海大学 Road elevation measurement method based on binocular stereo vision

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102779334B (en) * 2012-07-20 2015-01-07 华为技术有限公司 Correction method and device of multi-exposure motion image

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102422124A (en) * 2010-05-31 2012-04-18 松下电器产业株式会社 Imaging device, imaging means and program
CN101887589A (en) * 2010-06-13 2010-11-17 东南大学 Stereoscopic vision-based real low-texture image reconstruction method
US20120162366A1 (en) * 2010-12-27 2012-06-28 Dolby Laboratories Licensing Corporation 3D Cameras for HDR

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108028036A (en) * 2015-09-25 2018-05-11 索尼公司 Image processing equipment and image processing method
CN108028893A (en) * 2015-10-21 2018-05-11 高通股份有限公司 Multiple camera auto-focusings are synchronous
CN108028893B (en) * 2015-10-21 2021-03-12 高通股份有限公司 Method and apparatus for performing image autofocus operation
US9998720B2 (en) 2016-05-11 2018-06-12 Mediatek Inc. Image processing method for locally adjusting image data of real-time image
TWI639974B (en) * 2016-05-11 2018-11-01 聯發科技股份有限公司 Image processing method for locally adjusting image data of real-time image
CN108335279A (en) * 2017-01-20 2018-07-27 微软技术许可有限责任公司 Image fusion and HDR imaging
WO2018209603A1 (en) * 2017-05-17 2018-11-22 深圳配天智能技术研究院有限公司 Image processing method, image processing device, and storage medium
CN107396082B (en) * 2017-07-14 2020-04-21 歌尔股份有限公司 Image data processing method and device
CN107396082A (en) * 2017-07-14 2017-11-24 歌尔股份有限公司 Image data processing method and device
CN109819173A (en) * 2017-11-22 2019-05-28 浙江舜宇智能光学技术有限公司 Depth integration method and TOF camera based on TOF imaging system
CN109819173B (en) * 2017-11-22 2021-12-03 浙江舜宇智能光学技术有限公司 Depth fusion method based on TOF imaging system and TOF camera
US10997696B2 (en) 2017-11-30 2021-05-04 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method, apparatus and device
CN108184075A (en) * 2018-01-17 2018-06-19 百度在线网络技术(北京)有限公司 For generating the method and apparatus of image
CN110276714A (en) * 2018-03-16 2019-09-24 虹软科技股份有限公司 Method and device for synthesizing rapid scanning panoramic image
CN110276714B (en) * 2018-03-16 2023-06-06 虹软科技股份有限公司 Method and device for synthesizing rapid scanning panoramic image
CN110677558A (en) * 2018-07-02 2020-01-10 华晶科技股份有限公司 Image processing method and electronic device
CN110677558B (en) * 2018-07-02 2021-11-02 华晶科技股份有限公司 Image processing method and electronic device
CN109842791A (en) * 2019-01-15 2019-06-04 浙江舜宇光学有限公司 Image processing method and device
CN109842791B (en) * 2019-01-15 2020-09-25 浙江舜宇光学有限公司 Image processing method and device

Also Published As

Publication number Publication date
WO2015139454A1 (en) 2015-09-24
CN104935911B (en) 2017-07-21

Similar Documents

Publication Publication Date Title
CN104935911B (en) Method and device for high-dynamic-range image synthesis
US10540806B2 (en) Systems and methods for depth-assisted perspective distortion correction
KR101742120B1 (en) Apparatus and method for image processing
US9445071B2 (en) Method and apparatus generating multi-view images for three-dimensional display
CN106886979A (en) Image stitching device and image stitching method
EP2704097A2 (en) Depth estimation device, depth estimation method, depth estimation program, image processing device, image processing method, and image processing program
CN102665086B (en) Method for obtaining parallax by using region-based local stereo matching
US20110032341A1 (en) Method and system to transform stereo content
US9025862B2 (en) Range image pixel matching method
US8538135B2 (en) Pulling keys from color segmented images
US20110134109A1 (en) Auto-stereoscopic interpolation
CN111988593B (en) Three-dimensional image color correction method and system based on depth residual optimization
CN108269242B (en) Image enhancement method
JP2005129013A (en) Device and method for image processing
Hervieu et al. Stereoscopic image inpainting: distinct depth maps and images inpainting
CN103945207B (en) Method for removing vertical parallax of stereoscopic images based on view synthesis
CN114998320B (en) Method, system, electronic device and storage medium for visual saliency detection
CN111369660A (en) Seamless texture mapping method for three-dimensional model
CN108109148A (en) Image solid distribution method, mobile terminal
US20130182944A1 (en) 2d to 3d image conversion
CN112640413B (en) Method for displaying a model of the surroundings, control device and vehicle
JP2019184308A (en) Depth estimation device and program, as well as virtual viewpoint video generator and its program
GB2585197A (en) Method and system for obtaining depth data
CN104754320B (en) Method for computing 3D JND thresholds
CN105282534A (en) System and method for embedding stereo imagery

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant