WO2015139454A1 - Method and device for synthesizing a high dynamic range image - Google Patents

Method and device for synthesizing a high dynamic range image

Info

Publication number
WO2015139454A1
WO2015139454A1 (PCT application PCT/CN2014/089071)
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
image
value
hole
candidate
Prior art date
Application number
PCT/CN2014/089071
Other languages
English (en)
Chinese (zh)
Inventor
高山
徐崚峰
区子廉
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2015139454A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/111Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/741Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/15Processing image signals for colour aspects of image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0081Depth or disparity estimation from stereoscopic image signals

Definitions

  • the present invention relates to the field of image processing, and in particular, to a method and apparatus for high dynamic range image synthesis.
  • A high dynamic range image is an image obtained by adjusting the exposure time of the camera, capturing multiple images of the same scene with different exposure times, and fusing the differently exposed images using image synthesis technology. The long-exposure image has clear detail in dark areas, while the short-exposure image has clear detail in bright areas. Compared with normal images, high dynamic range images provide a wider dynamic range and more image detail, better reflecting the real environment.
  • Existing high dynamic range image synthesis technology is mainly divided into two categories: the first is single-camera high dynamic range image synthesis; the second is multi-camera high dynamic range image synthesis.
  • In a multi-camera high dynamic range image synthesis technique, a plurality of cameras first capture the same object simultaneously with different exposure times, producing a plurality of images. Two images are selected from the plurality of images, and a disparity map of the two images is obtained from the relationship between their corresponding points. One of the two images is then warped, according to the disparity map, into a virtual image at the viewing angle of the other image. Finally, a final high dynamic range image is obtained from the virtual image and the image at the other viewing angle.
  • Embodiments of the present invention provide a method and apparatus for high dynamic range image synthesis to improve the quality of high dynamic range images.
  • An embodiment of the present invention provides a method for high dynamic range image synthesis, comprising: acquiring a first image and a second image, the first image and the second image being obtained by shooting the same object simultaneously with different exposures; performing binocular stereo matching on the first image and the second image to obtain a disparity map; synthesizing, according to the disparity map and the first image, a virtual view having the same viewing angle as the second image; obtaining a second grayscale image from the second image and a virtual view grayscale image from the virtual view; obtaining a high dynamic range grayscale image from the second grayscale image and the virtual view grayscale image by a high dynamic range synthesis algorithm; and obtaining a high dynamic range image from the high dynamic range grayscale image, the second grayscale image, the virtual view grayscale image, the second image, and the virtual view.
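The method claim above describes a pipeline of stereo matching, view synthesis, grayscale extraction, and fusion. The following is a minimal, hypothetical sketch of the warping and fusion stages only; the function names, the plain averaging used for fusion, and the list-of-tuples image representation are illustrative assumptions, not the patent's algorithm.

```python
def to_grayscale(img):
    """Average the channels of each (r, g, b) pixel; keep hole pixels as None."""
    return [[sum(px) / 3.0 if px is not None else None for px in row]
            for row in img]

def warp_first_to_second_view(first, disparity):
    """Shift each pixel of the first image horizontally by its disparity,
    producing a virtual view aligned with the second image. Pixels that map
    outside the image stay None (hole pixels)."""
    h, w = len(first), len(first[0])
    virtual = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            xs = x - disparity[y][x]
            if 0 <= xs < w:
                virtual[y][xs] = first[y][x]
    return virtual

def fuse(gray_a, gray_b):
    """Toy exposure fusion: plain average where both views are defined,
    falling back to the second view where the virtual view has a hole."""
    return [[(a + b) / 2.0 if a is not None and b is not None else b
             for a, b in zip(ra, rb)] for ra, rb in zip(gray_a, gray_b)]
```

With zero disparity the virtual view coincides with the first image, and the fused grayscale is the per-pixel average of the two exposures.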
  • The method further includes: when synthesizing the virtual view having the same viewing angle as the second image according to the disparity map and the first image, marking the pixels of the occlusion area in the virtual view as hole pixels, the occlusion area being an area generated because the first image and the second image are captured from different angles of the same object; or, after synthesizing the virtual view having the same viewing angle as the second image according to the disparity map and the first image, and before obtaining the second grayscale image from the second image and the virtual view grayscale image from the virtual view, marking the noise pixels or the occlusion region in the virtual view as hole pixels, a noise pixel being generated by a pixel whose disparity value in the disparity map is calculated incorrectly;
  • the obtaining the virtual view grayscale image according to the virtual view comprises: obtaining a virtual view grayscale image marked with a hole pixel according to the virtual view
  • obtaining, according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image, the matching energy E d (p, d i ) of each candidate disparity value in the candidate disparity value set of each pixel of the first image includes: using a formula according to each candidate disparity value in the candidate disparity value set of the pixel p
  • the matching energy E d (p, d i ) of the pixel p for each candidate disparity value d i in the candidate disparity value set is obtained; the values of the first fitting parameter a and the second fitting parameter b are the values for which the matching energy E d (p, d i ) is minimized;
  • the w(p, q, d i ) = w c (p, q, d i ) w s (p, q, d i ) w d (p, q, d i );
  • the first pixel block ⁇ p represents one pixel block in the first image including the pixel p;
  • the pixel q is a pixel belonging to the first pixel block ⁇ p adjacent to the pixel p ;
  • the I 1 (q) represents a pixel value of the pixel q;
  • the I 2 (q−d i ) represents the pixel value of the pixel q−d i in the second image corresponding to the pixel q;
  • the pixel weight value w c (p, q, d i ) may be according to a formula
  • the distance weight value w s (p, q, d i ) can be according to a formula
  • the disparity weight value w d (p, q, d i ) may be according to a formula
  • I 1 (p) represents a pixel value of the pixel p
  • the I 2 (p−d i ) represents the pixel value of the pixel p−d i in the second image corresponding to the pixel p;
  • the first weight coefficient γ 1 , the second weight coefficient γ 2 , the third weight coefficient γ 3 , and the fourth weight coefficient γ 4 are preset values.
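The formula images for w c , w s , and w d are not reproduced in this text; only their inputs and the preset coefficients γ 1 –γ 4 are stated. The sketch below therefore assumes common exponential (bilateral-style) forms for the three weights, which is an interpretation, not the claimed equations:

```python
import math

# Illustrative reconstruction only: the patent's formula images are missing,
# so these exponential forms and coefficient values are assumptions.
GAMMA1, GAMMA2, GAMMA3, GAMMA4 = 10.0, 10.0, 5.0, 2.0  # preset weight coefficients

def w_c(i1_p, i1_q, i2_p_di, i2_q_di):
    """Pixel (color) weight: similar intensities in both views -> weight near 1."""
    return math.exp(-(abs(i1_p - i1_q) / GAMMA1 + abs(i2_p_di - i2_q_di) / GAMMA2))

def w_s(p, q):
    """Distance weight: pixels q nearer to p in the block get higher weight."""
    dist = math.hypot(p[0] - q[0], p[1] - q[1])
    return math.exp(-dist / GAMMA3)

def w_d(d_p, d_q):
    """Disparity weight: neighbours with similar candidate disparity count more."""
    return math.exp(-abs(d_p - d_q) / GAMMA4)

def combined_weight(i1_p, i1_q, i2_p_di, i2_q_di, p, q, d_p, d_q):
    # w(p, q, d_i) = w_c * w_s * w_d, as stated in the text
    return w_c(i1_p, i1_q, i2_p_di, i2_q_di) * w_s(p, q) * w_d(d_p, d_q)
```

A pixel q identical to p in intensity, position, and disparity receives the maximal weight of 1; any difference in any factor strictly reduces the product.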
  • obtaining, according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image, the matching energy E d (p, d i ) of each candidate disparity value in the candidate disparity value set of each pixel of the first image includes: using a formula according to each candidate disparity value in the candidate disparity value set of the pixel p
  • the pixel weight value w c (p, q, d i ) can be according to a formula
  • the distance weight value w s (p, q, d i ) can be according to a formula
  • the disparity weight value w d (p, q, d i ) may be according to a formula
  • the I' 1 (p) = I 1 (p)cos θ − I 2 (p−d i )sin θ;
  • the I' 2 (p−d i ) = I 1 (p)sin θ − I 2 (p−d i )cos θ;
  • the adjustment angle θ is a value preset to be greater than 0° and less than 90°.
  • obtaining the disparity value of each pixel in the first image according to the matching energy E d (p, d i ) of each candidate disparity value in the candidate disparity value set of each pixel of the first image includes: according to a formula
  • the m is the total number of candidate disparity values in the candidate disparity value set of the pixel q; the smoothing coefficient λ is a preset value; the truncation value V max of the disparity difference between adjacent pixels is a preset value.
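The description above pairs a per-pixel matching energy E d (p, d i ) with a smoothing coefficient λ and a cap V max on the disparity difference of adjacent pixels. A standard way to combine such terms (assumed here, since the patent's exact formula is not reproduced) is a scanline dynamic program over a truncated-linear smoothness cost:

```python
# Sketch, not the patent's exact optimisation: choose one disparity label per
# pixel on a scanline by dynamic programming, combining the matching energy
# E_d(p, d_i) with a smoothness term lam * min(|d_p - d_q|, v_max).

def select_disparities(energies, lam=1.0, v_max=2):
    """energies[p][d] = matching energy of candidate d at pixel p."""
    n, m = len(energies), len(energies[0])
    cost = [list(energies[0])]
    back = []
    for p in range(1, n):
        row, brow = [], []
        for d in range(m):
            best, arg = None, 0
            for d_prev in range(m):
                c = cost[-1][d_prev] + lam * min(abs(d - d_prev), v_max)
                if best is None or c < best:
                    best, arg = c, d_prev
            row.append(best + energies[p][d])
            brow.append(arg)
        cost.append(row)
        back.append(brow)
    # backtrack the minimal-cost labelling
    d = min(range(m), key=lambda j: cost[-1][j])
    out = [d]
    for p in range(n - 2, -1, -1):
        d = back[p][d]
        out.append(d)
    return out[::-1]
```

With λ = 0 each pixel independently takes its cheapest candidate; a large λ forces neighbouring pixels toward equal disparities, which is exactly the role of the smoothing coefficient described above.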
  • acquiring the high dynamic range image according to the high dynamic range grayscale image, the second grayscale image, the virtual view grayscale image, the second image, and the virtual view includes: sequentially using the formula
  • marking the noise pixels in the virtual view as hole pixels includes: determining at least two second pixels in the second image, a second pixel being a pixel having the same pixel value as the others; obtaining, according to the at least two second pixels in the second image, at least two marked pixels in the virtual view, the marked pixels being the pixels in the virtual view respectively corresponding to the at least two second pixels in the second image; acquiring the average pixel value of the at least two marked pixels in the virtual view; determining, in sequence, whether the difference between the pixel value of each of the at least two marked pixels and the average pixel value is greater than the noise threshold; and, if the difference between the pixel value of a marked pixel and the average pixel value is greater than the noise threshold, determining that marked pixel to be a noise pixel and marking the noise pixel as a hole pixel.
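The noise-marking procedure above can be sketched directly; the coordinate-list interface and the use of None as the hole marker are illustrative assumptions:

```python
# Pixels that share one value in the second image should map to similar values
# in the virtual view; an outlier among them is marked as a hole pixel (None).

def mark_noise_pixels(virtual, marked_coords, noise_threshold):
    """marked_coords: coordinates in the virtual view corresponding to
    second-image pixels that all share the same pixel value."""
    vals = [virtual[y][x] for y, x in marked_coords]
    mean = sum(vals) / len(vals)
    for (y, x), v in zip(marked_coords, vals):
        if abs(v - mean) > noise_threshold:
            virtual[y][x] = None  # mark the noise pixel as a hole pixel
    return virtual
```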
  • acquiring the pixel value of any one of the hole pixels r in the high dynamic range image from its adjacent pixels includes: according to the formula
  • the s represents a pixel in the neighborhood Ω r of the pixel r in the high dynamic range image;
  • the I(s) represents a pixel value of the pixel s; and
  • the I 2 (s) represents the pixel value of the pixel in the second image corresponding to the pixel s;
  • the |r−s| represents the distance between the pixel r and the pixel s;
  • the σ is a preset weight coefficient for the distance between the pixel r and the pixel s.
  • acquiring the similarity coefficient between any one of the hole pixels r in the high dynamic range image and its adjacent pixels includes: according to the formula
  • the first proportional coefficient ε 1 and the second proportional coefficient ε 2 are preset values; the s represents one of the pixels in the neighborhood Ω r of the pixel r in the high dynamic range image; the A represents the high dynamic range image; the a' n represents the similarity coefficient obtained when calculating the pixel value of the first hole pixel.
  • acquiring the similarity coefficient between any one of the hole pixels r in the high dynamic range image and its adjacent pixels includes: determining whether the hole pixel r has a first hole pixel, the first hole pixel being a hole pixel that already has a pixel value among the adjacent hole pixels of the hole pixel r; and, if the first hole pixel is determined to exist, using the similarity coefficient of the first hole pixel as the similarity coefficient of the hole pixel r.
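For the hole-filling step the formula image is again missing; the sketch below assumes a distance-weighted average of the non-hole neighbors, with σ standing in for the preset distance weight coefficient mentioned above:

```python
import math

# Hedged sketch of hole filling: each non-hole neighbour s of the hole pixel r
# is weighted by its distance to r. The exponential form is an assumption,
# since the patent's formula image is not reproduced.

def fill_hole(image, r, sigma=1.0):
    ry, rx = r
    num = den = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            y, x = ry + dy, rx + dx
            if (dy, dx) == (0, 0) or not (0 <= y < len(image) and 0 <= x < len(image[0])):
                continue
            s = image[y][x]
            if s is None:  # skip other hole pixels that have no value yet
                continue
            w = math.exp(-math.hypot(dy, dx) / sigma)
            num += w * s
            den += w
    return num / den if den else None
```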
  • the k is the total number of candidate disparity values in the candidate disparity value set of the pixel p; according to the matching energy E d (p, d i ) of each candidate disparity value in the candidate disparity value set of each pixel of the first image, the disparity value of each pixel in the first image is obtained; the disparity values of each pixel in the first image are combined to obtain the disparity map.
  • obtaining, according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image, the matching energy E d (p, d i ) of each candidate disparity value in the candidate disparity value set of each pixel of the first image includes: using a formula according to each candidate disparity value in the candidate disparity value set of the pixel p
  • the matching energy E d (p, d i ) of the pixel p for each candidate disparity value d i in the candidate disparity value set is obtained; the values of the first fitting parameter a and the second fitting parameter b are the values for which the matching energy E d (p, d i ) is minimized;
  • the w(p, q, d i ) = w c (p, q, d i ) w s (p, q, d i ) w d (p, q, d i );
  • the first pixel block ⁇ p represents one pixel block in the first image including the pixel p;
  • the pixel q is a pixel belonging to the first pixel block ⁇ p adjacent to the pixel p ;
  • the I 1 (q) represents a pixel value of the pixel q;
  • the I 2 (q−d i ) represents the pixel value of the pixel q−d i in the second image corresponding to the pixel q;
  • the pixel weight value w c (p, q, d i ) may be according to a formula
  • the distance weight value w s (p, q, d i ) can be according to a formula
  • the disparity weight value w d (p, q, d i ) may be according to a formula
  • I 1 (p) represents a pixel value of the pixel p
  • the I 2 (p−d i ) represents the pixel value of the pixel p−d i in the second image corresponding to the pixel p;
  • the first weight coefficient γ 1 , the second weight coefficient γ 2 , the third weight coefficient γ 3 , and the fourth weight coefficient γ 4 are preset values.
  • obtaining, according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image, the matching energy E d (p, d i ) of each candidate disparity value in the candidate disparity value set of each pixel of the first image includes: using a formula according to each candidate disparity value in the candidate disparity value set of the pixel p
  • the pixel weight value w c (p, q, d i ) can be according to a formula
  • the distance weight value w s (p, q, d i ) can be according to a formula
  • the disparity weight value w d (p, q, d i ) may be according to a formula
  • the I' 1 (p) = I 1 (p)cos θ − I 2 (p−d i )sin θ;
  • the I' 2 (p−d i ) = I 1 (p)sin θ − I 2 (p−d i )cos θ;
  • the adjustment angle θ is a value preset to be greater than 0° and less than 90°.
  • obtaining the disparity value of each pixel in the first image according to the matching energy E d (p, d i ) of each candidate disparity value in the candidate disparity value set of each pixel of the first image includes: according to a formula
  • the m is the total number of candidate disparity values in the candidate disparity value set of the pixel q; the smoothing coefficient λ is a preset value; the truncation value V max of the disparity difference between adjacent pixels is a preset value.
  • An embodiment of the present invention provides a high dynamic range image synthesizing device, including: an acquiring unit, configured to acquire a first image and a second image, the first image and the second image being captured simultaneously from the same object with different exposures; a disparity processing unit, configured to perform binocular stereo matching on the first image and the second image acquired by the acquiring unit to obtain a disparity map; a virtual view synthesizing unit, configured to synthesize, according to the disparity map obtained by the disparity processing unit and the first image acquired by the acquiring unit, a virtual view having the same viewing angle as the second image; a grayscale extracting unit, configured to obtain a second grayscale image from the second image acquired by the acquiring unit and a virtual view grayscale image from the virtual view synthesized by the virtual view synthesizing unit; and a high dynamic range fusion unit, configured to obtain, by a high dynamic range synthesis algorithm, a high dynamic range grayscale image from the second grayscale image and the virtual view grayscale image produced by the grayscale extracting unit.
  • The device further includes a hole pixel processing unit, configured to mark the noise pixels or the occlusion region in the virtual view as hole pixels;
  • the occlusion region is an area generated because the first image and the second image are captured from different angles of the same object; a noise pixel is generated by a pixel whose disparity value in the disparity map is calculated incorrectly;
  • the grayscale extracting unit is configured to obtain a virtual view grayscale image marked with hole pixels according to the virtual view marked with hole pixels;
  • the high dynamic range fusion unit is specifically configured to obtain, by a high dynamic range synthesis algorithm, a high dynamic range grayscale image marked with hole pixels, using the second grayscale image and the virtual view grayscale image marked with hole pixels;
  • the color interpolation unit is specifically configured to obtain the high dynamic range image according to the high dynamic range grayscale image marked with hole pixels, the second grayscale image, and the virtual view grayscale image marked with hole pixels;
  • the disparity processing unit includes: an obtaining module, a calculating module, a determining module, and a combining module;
  • the determining module is configured to obtain the disparity value of each pixel in the first image according to the matching energy of each candidate disparity value in the candidate disparity value set of each pixel of the first image; the combining module is configured to combine the disparity values of each pixel in the first image to obtain the disparity map.
  • the calculating module is specifically configured to use a formula according to each candidate disparity value in the candidate disparity value set of the pixel p
  • the matching energy E d (p, d i ) of the pixel p for each candidate disparity value d i in the candidate disparity value set is obtained; the values of the first fitting parameter a and the second fitting parameter b are the values for which the matching energy E d (p, d i ) is minimized;
  • the w(p, q, d i ) = w c (p, q, d i ) w s (p, q, d i ) w d (p, q, d i );
  • the first pixel block ⁇ p represents one pixel block in the first image including the pixel p;
  • the pixel q is a pixel belonging to the first pixel block ⁇ p adjacent to the pixel p ;
  • the I 1 (q) represents a pixel value of the pixel q;
  • the I 2 (q−d i ) represents the pixel value of the pixel q−d i in the second image corresponding to the pixel q;
  • the pixel weight value w c (p, q, d i ) may be according to a formula
  • the distance weight value w s (p, q, d i ) can be according to a formula
  • the disparity weight value w d (p, q, d i ) may be according to a formula
  • I 1 (p) represents a pixel value of the pixel p
  • the I 2 (p−d i ) represents the pixel value of the pixel p−d i in the second image corresponding to the pixel p;
  • the first weight coefficient γ 1 , the second weight coefficient γ 2 , the third weight coefficient γ 3 , and the fourth weight coefficient γ 4 are preset values.
  • the calculating module is specifically configured to use a formula according to each candidate disparity value in the candidate disparity value set of the pixel p
  • the pixel weight value w c (p, q, d i ) can be according to a formula
  • the distance weight value w s (p, q, d i ) can be according to a formula
  • the disparity weight value w d (p, q, d i ) may be according to a formula
  • the I' 1 (p) = I 1 (p)cos θ − I 2 (p−d i )sin θ;
  • the I' 2 (p−d i ) = I 1 (p)sin θ − I 2 (p−d i )cos θ;
  • the adjustment angle θ is a value preset to be greater than 0° and less than 90°.
  • the determining module is specifically used according to a formula
  • the m is the total number of candidate disparity values in the candidate disparity value set of the pixel q; the smoothing coefficient λ is a preset value; the truncation value V max of the disparity difference between adjacent pixels is a preset value.
  • the color interpolation unit is specifically used to sequentially use the formula
  • the hole pixel processing unit is specifically configured to determine at least two second pixels in the second image, a second pixel being a pixel having the same pixel value as the others; the hole pixel processing unit is specifically configured to obtain, according to the at least two second pixels in the second image, at least two marked pixels in the virtual view, the marked pixels in the virtual view being the pixels respectively corresponding to the at least two second pixels in the second image;
  • the hole pixel processing unit is specifically configured to acquire the average pixel value of the at least two marked pixels in the virtual view, and to determine, in sequence, whether the difference between the pixel value of each of the at least two marked pixels in the virtual view and the average pixel value is greater than the noise threshold; the noise threshold is a preset value for determining noise;
  • the hole pixel processing unit is specifically configured to determine a marked pixel to be a noise pixel if the difference between its pixel value and the average pixel value is greater than the noise threshold, and to mark the noise pixel as a hole pixel;
  • the hole pixel processing unit is specifically configured according to a formula
  • the s represents a pixel in the neighborhood Ω r of the pixel r in the high dynamic range image;
  • the I(s) represents a pixel value of the pixel s; and
  • the I 2 (s) represents the pixel value of the pixel in the second image corresponding to the pixel s;
  • the |r−s| represents the distance between the pixel r and the pixel s;
  • the σ is a preset weight coefficient for the distance between the pixel r and the pixel s.
  • the hole pixel processing unit is specifically configured according to a formula
  • the first proportional coefficient ε 1 and the second proportional coefficient ε 2 are preset values; the s represents one of the pixels in the neighborhood Ω r of the pixel r in the high dynamic range image; the A represents the high dynamic range image; the a' n represents the similarity coefficient obtained when calculating the pixel value of the first hole pixel.
  • the hole pixel processing unit is specifically configured to determine whether the hole pixel r has a first hole pixel; the first hole pixel is a hole pixel that already has a pixel value among the adjacent hole pixels of the hole pixel r; if the first hole pixel is determined to exist, the similarity coefficient of the first hole pixel is taken as the similarity coefficient of the hole pixel r.
  • An embodiment of the present invention provides an apparatus, including: an acquiring unit, configured to acquire a first image and a second image; the first image and the second image are captured simultaneously from the same object with different exposures;
  • the k is the total number of candidate disparity values in the candidate disparity value set of the pixel p; the determining unit is configured to obtain the disparity value of each pixel in the first image according to the matching energy E d (p, d i ) of each candidate disparity value in the candidate disparity value set of each pixel of the first image; the processing unit is configured to combine the disparity values of each pixel in the first image to obtain the disparity map.
  • the calculating unit is specifically configured to use a formula according to each candidate disparity value in the candidate disparity set of the pixel p
  • the matching energy E d (p, d i ) of the pixel p for each candidate disparity value d i in the candidate disparity value set is obtained; the values of the first fitting parameter a and the second fitting parameter b are the values for which the matching energy E d (p, d i ) is minimized;
  • the w(p, q, d i ) = w c (p, q, d i ) w s (p, q, d i ) w d (p, q, d i );
  • the first pixel block ⁇ p represents one pixel block in the first image including the pixel p;
  • the pixel q is a pixel belonging to the first pixel block ⁇ p adjacent to the pixel p ;
  • the I 1 (q) represents a pixel value of the pixel q;
  • the I 2 (q−d i ) represents the pixel value of the pixel q−d i in the second image corresponding to the pixel q;
  • the pixel weight value w c (p, q, d i ) may be according to a formula
  • the distance weight value w s (p, q, d i ) can be according to a formula
  • the disparity weight value w d (p, q, d i ) may be according to a formula
  • I 1 (p) represents a pixel value of the pixel p
  • the I 2 (p−d i ) represents the pixel value of the pixel p−d i in the second image corresponding to the pixel p;
  • the first weight coefficient γ 1 , the second weight coefficient γ 2 , the third weight coefficient γ 3 , and the fourth weight coefficient γ 4 are preset values.
  • the calculating unit is specifically configured to use a formula according to each candidate disparity value in the candidate disparity set of the pixel p
  • the pixel weight value w c (p, q, d i ) can be according to a formula
  • the distance weight value w s (p, q, d i ) can be according to a formula
  • the disparity weight value w d (p, q, d i ) may be according to a formula
  • the I' 1 (p) = I 1 (p)cos θ − I 2 (p−d i )sin θ;
  • the I' 2 (p−d i ) = I 1 (p)sin θ − I 2 (p−d i )cos θ;
  • the adjustment angle θ is a value preset to be greater than 0° and less than 90°.
  • the determining unit is specifically used according to the formula
  • the m is the total number of candidate disparity values in the candidate disparity value set of the pixel q; the smoothing coefficient λ is a preset value; the truncation value V max of the disparity difference between adjacent pixels is a preset value.
  • In the method and apparatus for high dynamic range image synthesis provided by the embodiments of the present invention, a first image and a second image with different exposures are obtained; binocular stereo matching is performed on the first image and the second image to obtain a disparity map; a virtual view having the same viewing angle as the second image is synthesized according to the disparity map and the first image; a second grayscale image is obtained from the second image and a virtual view grayscale image from the virtual view; a high dynamic range grayscale image is obtained from the second grayscale image and the virtual view grayscale image by a high dynamic range synthesis algorithm; and finally a high dynamic range image is obtained from the high dynamic range grayscale image, the second grayscale image, the virtual view grayscale image, the second image, and the virtual view. In this way the relationship between adjacent pixels is taken into account when performing virtual view synthesis, improving the quality of high dynamic range images.
  • FIG. 1 is a schematic flowchart of a method for high dynamic range image synthesis according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of a mapping curve according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a rotation of a coordinate system according to an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart diagram of another method for high dynamic range image synthesis according to an embodiment of the present disclosure
  • FIG. 6 is a schematic diagram of determining a noise pixel according to an embodiment of the present invention.
  • FIG. 7 is a schematic flowchart diagram of a method for synthesizing a disparity map according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of functions of a high dynamic range image synthesizing device according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram showing the function of a parallax processing unit of the high dynamic range image synthesizing device shown in FIG. 8;
  • FIG. 10 is a schematic diagram of functions of another high dynamic range image synthesizing device according to an embodiment of the present invention.
  • FIG. 11 is a schematic diagram of functions of a device according to an embodiment of the present invention.
  • the embodiment of the invention provides a method for high dynamic range image synthesis, as shown in FIG. 1 , including:
  • The first image and the second image are captured simultaneously from the same object with different exposures.
  • The first image and the second image are rectified images, and there is only a horizontal or vertical displacement between the first image and the second image.
  • The exposure of the first image may be greater than that of the second image, or the exposure of the second image may be greater than that of the first image.
  • the specific degree of exposure of the first image and the second image is not limited by the present invention.
  • Binocular stereo matching is the process of obtaining parallax, and hence the three-dimensional information of an object, by finding corresponding pixels in images of the same object taken from two viewpoints.
  • The method for performing binocular stereo matching on the first image and the second image to obtain a disparity map may be an existing method for obtaining a disparity map of two images, such as WSAD (Weighted Sum of Absolute Differences) or ANCC (Adaptive Normalized Cross-Correlation), or it may be the method proposed by the present invention.
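For context, WSAD extends the plain sum-of-absolute-differences cost with per-pixel weights; a minimal unweighted SAD block matcher (not the patent's proposed algorithm) can be sketched as:

```python
# Minimal 1D SAD block matching: for a pixel x on the left scanline, test each
# candidate disparity d and keep the one with the lowest window cost.

def sad_disparity(left_row, right_row, x, max_disp, radius=1):
    """Best disparity for pixel x of a scanline pair by sum of absolute
    differences over a window of size 2*radius+1."""
    best_d, best_cost = 0, None
    for d in range(min(max_disp, x) + 1):
        cost = 0
        for k in range(-radius, radius + 1):
            xl = x + k          # window pixel in the left image
            xr = x - d + k      # corresponding window pixel in the right image
            if 0 <= xl < len(left_row) and 0 <= xr < len(right_row):
                cost += abs(left_row[xl] - right_row[xr])
        if best_cost is None or cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```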
  • The binocular stereo matching algorithm proposed by the present invention is as follows:
  • the candidate disparity value set includes at least two candidate disparity values.
  • A candidate disparity value corresponds to a depth in three-dimensional space. Since the depth has a certain range, the candidate disparity values also have a certain range; the values in this range together constitute the candidate disparity value set of one pixel.
  • The candidate disparity values in the candidate disparity value sets of different pixels in the first image may be the same or different.
  • the invention is not limited thereto.
  • p denotes a pixel of the first image corresponding to the candidate disparity value set.
  • k is the total number of candidate disparity values in the candidate disparity value set of pixel p.
  • Since the candidate disparity value set of each pixel includes at least two candidate disparity values, k ≥ 2.
  • the present invention proposes two methods for calculating the matching energy E d (p,d i ) as follows:
  • The first method: using a formula according to each candidate disparity value in the candidate disparity value set of the pixel p
  • the value of the first fitting parameter a and the value of the second fitting parameter b are values corresponding to the matching energy E d (p, d i ) being the minimum value.
• w(p,q,d i ) = w c (p,q,d i )·w s (p,q,d i )·w d (p,q,d i ).
  • the first pixel block ⁇ p represents one pixel block containing the pixel p in the first image.
• The pixel q is a pixel belonging to the first pixel block Ω p and adjacent to the pixel p.
  • I 1 (q) represents the pixel value of the pixel q.
• I 2 (q−d i ) represents the pixel value of the pixel q−d i in the second image corresponding to the pixel q.
  • w c (p, q, d i ) represents a pixel weight value;
  • w s (p, q, d i ) represents a distance weight value;
  • w d (p, q, d i ) represents a parallax weight value.
  • the pixel weight value w c (p, q, d i ) can be according to a formula
  • the distance weight value w s (p, q, d i ) can be based on the formula
  • the parallax weight value w d (p, q, d i ) can be based on the formula
  • I 1 (p) represents a pixel value of the pixel p
• I 2 (p−d i ) represents the pixel value of the pixel p−d i in the second image corresponding to the pixel p.
• The first weighting coefficient γ 1 , the second weighting coefficient γ 2 , the third weighting coefficient γ 3 , and the fourth weighting coefficient γ 4 are preset values.
  • the first pixel block ⁇ p represents one pixel block including the pixel p in the first image.
• The first pixel block may be a 3-neighborhood or a 4-neighborhood of the pixel p, and it may or may not be centered on the pixel p.
• The specific size of the first pixel block and the specific location of the pixel p within the first pixel block are not limited by the present invention.
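• The first matching-energy method above can be sketched as follows: for a candidate disparity d, fit I 1 (q) ≈ a·I 2 (q−d) + b over the pixel block around p by weighted least squares (so a and b minimise the energy) and report the weighted residual as E d (p,d). The exponential forms and coefficient values chosen for the three weights w c , w s , w d are assumptions standing in for the formulas that are referenced but not reproduced in this text.

```python
import math

# Hedged sketch: weighted matching energy for candidate disparity d at p.
# Images are 2-D lists of intensities; weight forms are illustrative.

def matching_energy(I1, I2, p, d, radius=1, g1=0.1, g2=0.1, g3=0.05):
    x0, y0 = p
    pts, wts = [], []
    wd = 1.0                                   # parallax weight w_d (assumed form)
    if 0 <= x0 - d < len(I2[0]):
        wd = math.exp(-g3 * abs(I1[y0][x0] - I2[y0][x0 - d]))
    for y in range(y0 - radius, y0 + radius + 1):
        for x in range(x0 - radius, x0 + radius + 1):
            if 0 <= y < len(I1) and 0 <= x < len(I1[0]) and 0 <= x - d < len(I2[0]):
                wc = math.exp(-g1 * abs(I1[y][x] - I1[y0][x0]))   # pixel weight w_c
                ws = math.exp(-g2 * (abs(x - x0) + abs(y - y0)))  # distance weight w_s
                pts.append((I2[y][x - d], I1[y][x]))
                wts.append(wc * ws * wd)
    # weighted least-squares fit of I1 = a*I2 + b (a, b minimise the energy)
    sw = sum(wts)
    mu = sum(w * u for w, (u, _) in zip(wts, pts)) / sw
    mv = sum(w * v for w, (_, v) in zip(wts, pts)) / sw
    sxx = sum(w * (u - mu) ** 2 for w, (u, _) in zip(wts, pts))
    sxy = sum(w * (u - mu) * (v - mv) for w, (u, v) in zip(wts, pts))
    a = sxy / sxx if sxx else 0.0
    b = mv - a * mu
    return sum(w * abs(v - (a * u + b)) for w, (u, v) in zip(wts, pts))
```

When the two exposures really are related by a local linear model, the fitted residual at the true disparity is near zero, while wrong disparities leave a large residual.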
  • I 2 (j) represents the pixel value of any one of the pixels j in the second image
  • I 1 (f) represents the pixel value of the pixel f corresponding to the pixel j in the second image in the first image
• a and b are fitting parameters that vary as the pixel position changes; that is, the first fitting parameter a and the second fitting parameter b are different for different pixels.
• For any pixel f in the first image, the corresponding pixel in the second image can be calculated as f−d, where d represents the disparity value of the pixel f between the first image and the second image. It should be noted that, since the actual parallax between a pixel in the first image and the corresponding pixel in the second image is unknown at this point, the actual disparity value is approximated by the candidate disparity value.
• A plurality of candidate disparity values are set for each pixel in the first image, together constituting the candidate disparity value set of that pixel; the candidate disparity value in the set that differs least from the actual disparity value is then selected as the calculated disparity value of the pixel.
• That is to say, the disparity value of a pixel calculated in the embodiment of the present invention is not the actual disparity value of the pixel, but the value in the pixel's candidate disparity value set that is closest to the actual disparity value.
• The tangent n is a straight line with a slope of tan α. Since the coordinate system is rotated counterclockwise by the adjustment angle θ, the slope of the tangent n in the new coordinate system is reduced to tan(α − θ).
• When θ = 45°, the slope of the tangent n on the new coordinate axes is greatly reduced, becoming tan(α − 45°) < 1.
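• The effect of this coordinate rotation can be checked numerically: a line of slope tan α has, after the axes are rotated counterclockwise by the adjustment angle θ, slope tan(α − θ), so with θ = 45° any steep slope with α < 90° drops below 1.

```python
import math

# Slope of a line after rotating the coordinate axes counterclockwise
# by theta degrees: tan(alpha) becomes tan(alpha - theta).

def rotated_slope(alpha_deg, theta_deg=45.0):
    return math.tan(math.radians(alpha_deg - theta_deg))
```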
  • the pixel weight value w c (p, q, d i ) can be based on the formula
  • the distance weight value w s (p, q, d i ) can be based on the formula
  • the parallax weight value w d (p, q, d i ) can be based on the formula
• I' 1 (p) = I 1 (p)·cos θ − I 2 (p−d i )·sin θ;
• I' 2 (p−d i ) = I 1 (p)·sin θ + I 2 (p−d i )·cos θ;
  • the adjustment angle ⁇ is a value that is set to be greater than 0° and less than 90° in advance.
• Pixel value weight w c (p,q,d i ) = exp[−γ 1 ·|I' 1 (p) − I' 2 (p−d i )|].
• The candidate disparity value of each pixel in the first image corresponding to the minimum candidate energy is determined as the disparity value of that pixel.
  • I represents a first image
  • the second pixel block N p represents a pixel block containing pixels p in the first image
• V p,q (d i ,d j ) = λ·min(|d i − d j |, V max ).
  • m is the total number of candidate disparity values in the candidate disparity value set of the pixel q;
  • the smoothing coefficient ⁇ is a value set in advance; the maximum value V max of the difference in parallax between adjacent pixels is a value set in advance.
  • the candidate energy includes two aspects, the first aspect is the sum of the matching energies of each pixel in the first image, and the second aspect is the smoothing energy V p,q of each pixel in the first image.
  • the second pixel block N p may be the same as the first pixel block or may be different from the first pixel block, which is not limited in the present invention.
  • Min(x, y) represents a function that takes the smaller of x and y.
• min(|d i − d j |, V max ) represents the smaller of the difference between the candidate disparity value d i of the pixel p and the candidate disparity value d j of the pixel q in the first image, and the preset maximum parallax difference V max between adjacent pixels.
• V max is a predefined cutoff value; its purpose is to prevent the smoothing energy from becoming too large, which would affect the accurate assignment of parallax at the edge between the foreground and the background.
• The smaller the candidate energy, the greater the similarity between the first image and the second image; that is, the better the matching between the pixels in the first image and the second image.
• The disparity values of the pixels of the first image determined in this step are the combination of candidate disparity values for which the resulting candidate energy is the minimum value;
• within that combination, the candidate disparity value of each pixel in the first image is the disparity value of that pixel.
• Using the existing graph cuts method, the combination of candidate disparity values of the pixels in the first image can be obtained quickly; it is not necessary to traverse all combinations of the candidate disparity values of each pixel.
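• The candidate energy that graph cuts minimizes combines the per-pixel matching energies with the truncated-linear smoothing term V p,q (d i ,d j ) = λ·min(|d i − d j |, V max ). Purely to illustrate that objective (not the graph-cuts solver itself), the sketch below minimizes it by brute force over a tiny one-dimensional row of pixels, with the matching energies supplied as a table.

```python
from itertools import product

# Brute-force minimisation of the candidate energy over a 1-D row of pixels:
# E(labels) = sum_p Ed[p][d_p] + sum over adjacent pairs of lam*min(|d_p - d_q|, vmax).
# Illustrates the objective only; the text uses graph cuts on a 2-D image.

def smooth(di, dj, lam=1.0, vmax=2):
    return lam * min(abs(di - dj), vmax)  # truncated-linear V_{p,q}

def best_labels(Ed, candidates, lam=1.0, vmax=2):
    """Ed: list of per-pixel dicts {candidate disparity: matching energy}."""
    best, best_e = None, float("inf")
    for labels in product(candidates, repeat=len(Ed)):
        e = sum(Ed[p][d] for p, d in enumerate(labels))
        e += sum(smooth(labels[p], labels[p + 1], lam, vmax)
                 for p in range(len(labels) - 1))
        if e < best_e:
            best, best_e = labels, e
    return best, best_e
```

The truncation by V max keeps a single large disparity jump (a foreground/background edge) from being penalized out of existence.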
  • the disparity map is an image obtained by arranging the disparity values of each pixel in the first image in the order in which the original pixels are arranged.
• Experiments were performed with exposure ratios of the first image to the second image of 16:1, of 4:1, and of one further ratio.
  • the error rates of the first method and the second method proposed in the embodiments of the present invention and the existing WSAD algorithm and the ANCC algorithm are compared.
• Figure 4 shows the comparison results. It can be seen from the figure that when the ratio of exposure between the first image and the second image is particularly large, at 16:1, the WSAD and ANCC algorithms have a very large error rate, while the results of the first method and the second method proposed in this embodiment remain very accurate. For the other two exposure ratios, the first method and the second method proposed in this embodiment are likewise consistently superior to the results of WSAD and ANCC.
• Although the first method and the second method proposed in the embodiments of the present invention greatly reduce the error rate in calculating the disparity map, there is still a small portion of pixels whose calculated disparity values differ greatly from the actual disparity values; these pixels are treated as the pixels of the disparity map whose disparity values are calculated incorrectly.
• Using the disparity map and the first image, a virtual view of an arbitrary viewing angle can be synthesized.
  • the virtual view having the same viewing angle as the second image is synthesized by using the disparity map and the first image.
• The virtual view having the same viewing angle as the second image may be synthesized from the first image and the disparity map by using the prior art.
• A formula can be utilized to find the pixel value of each pixel in the virtual view, where:
  • I 1 (x, y) represents the pixel value of the pixel in which the abscissa is x and the ordinate is y in the first image
  • d represents the parallax value of the pixel in which the abscissa is x and the ordinate is y in the first image.
• The pixel value of the pixel corresponding to this pixel in the virtual view is the pixel value of the pixel in the first image after it has been translated in the horizontal direction by the parallax value d.
• A formula can be utilized to find the pixel value of each pixel in the virtual view, where:
  • I 1 (x, y) represents the pixel value of the pixel in which the abscissa is x and the ordinate is y in the first image
  • d represents the parallax value of the pixel in which the abscissa is x and the ordinate is y in the first image.
• The pixel value of the pixel corresponding to this pixel in the virtual view is the pixel value of the pixel in the first image after it has been translated in the vertical direction by the parallax value d.
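• The per-pixel translation described above amounts to forward-warping the first image by its disparity; positions that receive no pixel are the occlusion holes discussed later. In this sketch the shift direction (x − d) and the hole marker None are illustrative assumptions.

```python
# Forward-warp the first image into the virtual view: the pixel at (x, y)
# with disparity d moves to (x - d, y); positions receiving no pixel stay
# None, marking the hole (occlusion) pixels.  Shift direction is assumed.

def synthesize_view(I1, disp):
    h, w = len(I1), len(I1[0])
    view = [[None] * w for _ in range(h)]  # None marks hole pixels
    for y in range(h):
        for x in range(w):
            xv = x - disp[y][x]
            if 0 <= xv < w:
                view[y][xv] = I1[y][x]
    return view
```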
• The second grayscale image and the virtual view grayscale image may be acquired by any method in the prior art for obtaining a grayscale image from a color image; the present invention is not limited in this respect.
  • R represents the red component of any pixel in the color image
  • G represents the green component of the pixel
  • B represents the blue component of the pixel
• Grey represents the grayscale value of the corresponding pixel in the grayscale image.
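• The conversion formula itself is only referenced above; as a stand-in, the widely used BT.601 luma weighting of the red, green, and blue components can serve:

```python
def to_grey(rgb):
    """Per-pixel grayscale value.  The 0.299/0.587/0.114 weights are the
    BT.601 luma coefficients, assumed here in place of the exact formula
    the text refers to."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b
```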
  • a high dynamic range grayscale image is obtained by a high dynamic range synthesis algorithm.
  • the high dynamic range synthesis algorithm refers to an algorithm that combines multiple pictures to obtain a high dynamic range image.
• The second grayscale image and the virtual view grayscale image may be merged to obtain the high dynamic range grayscale image by using an existing single-camera high dynamic range image synthesis method or a multi-camera high dynamic range image synthesis method.
• In the prior art, the red, green, and blue components of the images to be synthesized are processed separately; in the present invention, only the grayscale of the images to be synthesized needs to be processed.
• Since the high dynamic range grayscale image acquired in the previous step does not include the red component value, the green component value, and the blue component value of each pixel, this step uses the second image and the virtual view, which carry color, to determine those component values for the high dynamic range image.
  • this step includes:
  • the red component values I red (e), the green component values I green (e), and the blue component values I blue (e) of each pixel in the high dynamic range image are obtained.
  • e represents a pixel e in a high dynamic range image
• I grey (e) represents the pixel value of the pixel corresponding to the pixel e in the high dynamic range grayscale image; the corresponding quantities for the second grayscale image and the virtual view grayscale image represent the pixel values of the pixels corresponding to the pixel e in those images; further quantities represent the red component value, the green component value, and the blue component value of the pixel corresponding to the pixel e in the second image, and the red component value, the green component value, and the blue component value of the pixel corresponding to the pixel e in the virtual view, respectively.
  • ⁇ (e) represents a weight coefficient for adjusting the ratio of the color of the second image to the color of the virtual view when synthesizing the high dynamic range image.
  • ⁇ (e) is a value calculated by the relationship between the second grayscale image, the virtual view grayscale image, and the corresponding pixel on the high dynamic range grayscale image.
  • T2 obtaining, according to a red component value, a green component value, and a blue component value of each pixel in the high dynamic range image, a pixel value of each pixel in the high dynamic range image;
• In the prior art, when the red component value, the green component value, and the blue component value of a pixel are known, the method of obtaining the pixel value of the pixel from them is known; the present invention does not repeat it here.
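• The blending of the second image's colors and the virtual view's colors via the weight coefficient described above is given only by reference; one plausible reading, assumed here, rescales each source's colour channel by the ratio of the high-dynamic-range luma to that source's luma and then mixes the two results with the weight:

```python
def blend_channel(w, grey_hdr, grey2, c2, greyv, cv, eps=1e-6):
    """Hedged sketch of per-channel colour recovery for one pixel e:
    w        -- the weight coefficient from the text
    grey_hdr -- HDR grayscale value at e
    grey2,c2 -- second image's grayscale value and colour channel at e
    greyv,cv -- virtual view's grayscale value and colour channel at e
    The luma-ratio rescaling is an assumption, not the patent's formula."""
    t2 = grey_hdr / max(grey2, eps) * c2   # second image colour, rescaled
    tv = grey_hdr / max(greyv, eps) * cv   # virtual view colour, rescaled
    return w * t2 + (1 - w) * tv
```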
  • T3 Combine the pixel values of each pixel in the high dynamic range image to obtain a high dynamic range image.
  • the high dynamic range image is formed by combining a plurality of pixels in an array, and each pixel can be expressed by a pixel value.
• In the method for high dynamic range image synthesis, a first image and a second image with different exposures are first obtained; binocular stereo matching is then performed on the first image and the second image to obtain a disparity map; a virtual view having the same viewing angle as the second image is synthesized according to the disparity map and the first image; a second grayscale image is acquired from the second image and a virtual view grayscale image from the virtual view; a high dynamic range grayscale image is obtained from the second grayscale image and the virtual view grayscale image through the high dynamic range synthesis algorithm; and finally the high dynamic range image is obtained from the high dynamic range grayscale image, the second grayscale image, the virtual view grayscale image, the second image, and the virtual view. Since the relationship between adjacent pixels is taken into consideration when performing virtual view synthesis, the quality of the high dynamic range image is improved.
  • a method for high dynamic range image synthesis provided by an embodiment of the present invention, as shown in FIG. 5, includes:
• The first image and the second image are images of the same object captured simultaneously with different degrees of exposure.
• For details, refer to step 101; they are not described herein again.
• For details, refer to step 102; they are not described herein again.
• The timing of marking the pixels of the occlusion area as hole pixels may be while synthesizing the virtual view or after synthesizing the virtual view.
  • different steps are performed according to the timing at which the pixel of the occlusion region is marked as a hole pixel. If the pixel of the occlusion area is marked as a hole pixel while synthesizing the virtual view, steps 503a-504a and steps 505-509 are performed; if the pixel of the occlusion area is marked as a hole pixel when the noise pixel is determined after synthesizing the virtual view, Steps 503b-504b and steps 505-509 are performed.
• The occlusion area is an area generated by the difference in the angles at which the first image and the second image of the same object are captured.
• Synthesizing the virtual view having the same viewing angle as the second image is the same as in step 103, and details are not described herein again.
• Due to occlusion, some pixels cannot be matched to corresponding pixels in the first image.
• These regions without corresponding pixels are mapped into the virtual view and form the occlusion region.
• The method of marking the occlusion area as hole pixels may be to set the pixel values of all pixels corresponding to the occlusion area in the virtual view to a fixed number, such as 1 or 0; or to create an image of the same size as the virtual view in which the pixel value at positions corresponding to the occlusion area is set to 0 and at positions corresponding to non-occlusion areas is set to 1. Other methods for marking pixels in the prior art may also be used, and the present invention does not limit this.
• For details, refer to step 103; they are not described herein again.
• Noise pixels are generated by pixels whose disparity values in the disparity map are calculated incorrectly.
• As described above, a candidate disparity value is selected from the candidate disparity value set as the disparity value of a pixel, so the calculated disparity value may carry a certain error.
• When the error of a certain pixel exceeds a certain limit, we treat that pixel as a pixel whose disparity value in the disparity map is calculated incorrectly.
• When the virtual view having the same viewing angle as the second image is synthesized according to the disparity map and the first image, pixels whose disparity values in the disparity map were calculated incorrectly generate noise in the synthesized virtual view.
  • the pixels corresponding to these noises are defined as noise pixels.
• There is a consistent relationship between the pixel values of corresponding pixels in the second image and the virtual view. For example, if the pixel value of a pixel in the second image is among the smaller pixel values of the second image, then the pixel value of the corresponding pixel in the virtual view is among the smaller pixel values of the virtual view; if the pixel value of a pixel in the second image is among the larger pixel values of the second image, then the pixel value of the corresponding pixel in the virtual view is among the larger pixel values of the virtual view.
  • the embodiment of the present invention utilizes this rule to mark pixels that do not conform to this rule as noise pixels.
  • the virtual view includes noise pixels.
• One pixel in the virtual view corresponds to one pixel in the second image. Taking the pixel value in the second image as the horizontal axis and the pixel value in the virtual view as the vertical axis, the pixel values of pixels at the same position in the two images are plotted, yielding the dot pattern on the right of FIG. 6. It can be observed that most of the points form a smooth increasing curve, while a small number of points lie far from this mapping curve; these points are noise. In our algorithm, we first use all the points to estimate the mapping curve and then calculate the distance from each point to the mapping curve; if the distance is large, the corresponding pixel in the virtual view is determined to be a noise pixel.
  • the method of selecting a noise pixel and marking a noise pixel can refer to the following steps:
• The second pixels refer to pixels having the same pixel value:
• all the pixels of the second image are grouped according to their pixel values, and all the pixels divided into the same group are called second pixels.
  • the at least two marked pixels in the virtual view are pixels respectively corresponding to at least two second pixels in the second image in the virtual view.
  • Q4 Determine, in sequence, whether a difference between a pixel value and an average pixel value of each of the at least two marked pixels in the virtual view is greater than a noise threshold.
• If the difference between the pixel value of a marked pixel and the corresponding average pixel value is greater than the preset noise threshold, the pixel is determined to be a noise pixel; if the difference is not greater than the preset noise threshold, the pixel is determined not to be a noise pixel.
  • the marked pixel is determined as a noise pixel, and the noise pixel is marked as a hole pixel.
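• The steps above (group the second image's pixels by value, average the corresponding virtual-view pixels, and flag large deviations) can be sketched as follows; the group-mean-plus-threshold test is a simplified stand-in for the mapping-curve fit the text describes.

```python
from collections import defaultdict

# Group second-image pixels by pixel value, average the virtual-view pixels
# of each group, and mark virtual-view pixels deviating from their group
# mean by more than the noise threshold (True = noise, hence hole pixel).

def noise_mask(second, virtual, threshold):
    groups = defaultdict(list)
    h, w = len(second), len(second[0])
    for y in range(h):
        for x in range(w):
            groups[second[y][x]].append(virtual[y][x])
    mean = {v: sum(vals) / len(vals) for v, vals in groups.items()}
    return [[abs(virtual[y][x] - mean[second[y][x]]) > threshold
             for x in range(w)] for y in range(h)]
```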
  • the method for marking the occlusion region may be the same as or different from the method for marking the noise pixel, and the present invention is not limited thereto.
  • the method for determining and marking the noise pixel may refer to the method for determining and marking the noise pixel in step 504a, and details are not described herein again.
  • the second grayscale image is obtained according to the second image in step 104, and the virtual view grayscale image is obtained according to the virtual view, and details are not described herein again.
• For a hole pixel in the virtual view marked with hole pixels, the pixel corresponding to it in the virtual view grayscale image is directly marked as a hole pixel.
  • a high dynamic range grayscale image marked with a hole pixel is obtained by a high dynamic range synthesis algorithm.
  • the high dynamic range gray image is obtained by the high dynamic range synthesis algorithm according to the second gray image and the virtual view gray image in step 105, and details are not described herein again.
• For a hole pixel in the virtual view grayscale image marked with hole pixels, the pixel corresponding to it in the high dynamic range grayscale image is directly marked as a hole pixel.
• For the processing of the non-hole pixels, refer to step 106, in which the high dynamic range image is obtained according to the high dynamic range gray image, the second gray image, the virtual view gray image, the second image, and the virtual view; this will not be repeated here.
• The hole pixels in the high dynamic range grayscale image marked with hole pixels are obtained from the hole pixels in the virtual view grayscale image marked with hole pixels, and the hole pixels in the virtual view grayscale image are in turn obtained from the virtual view marked with hole pixels; therefore, the hole pixels occupy the same positions in all three images.
• Since these positions coincide, any one of the three images can be selected as the standard, and the pixels corresponding to its hole pixels are directly marked as hole pixels in the high dynamic range image.
  • each hole pixel in the high dynamic range image marked with the hole pixel has a first pixel corresponding thereto in the second image.
  • a similarity coefficient between adjacent pixels of any one of the hole pixels r in the high dynamic range image and adjacent pixels of the first pixel is obtained.
  • s represents one of the neighborhoods ⁇ r of the pixel r in the high dynamic range image
  • I(s) represents the pixel value of the pixel s
• I 2 (s) represents the pixel value of the pixel corresponding to the pixel s in the second image;
• ‖r−s‖ represents the distance between the pixel r and the pixel s; λ is a preset weighting coefficient for the distance between the pixel r and the pixel s.
  • the neighborhood ⁇ r of the pixel r may be a region centered on the pixel r, or may not be a region centered on the pixel r.
  • the present invention is not limited.
• The similarity coefficient needs to be calculated once for each hole pixel; that is, the similarity coefficients of different hole pixels in a high dynamic range image are different.
  • a similarity coefficient between adjacent pixels of any one of the hole pixels r in the high dynamic range image and adjacent pixels of the first pixel is obtained.
• The first proportional coefficient and the second proportional coefficient are preset values; s represents one pixel of the neighborhood Ω r of the pixel r in the high dynamic range image; A represents the high dynamic range image; and a′ n represents the similarity coefficient obtained when the pixel value of the first hole pixel was calculated.
  • the pixel block ⁇ r is a region smaller than the pixel block ⁇ r .
  • the neighborhood ⁇ r of the pixel r may be a region centered on the pixel r, or may not be a region centered on the pixel r.
  • the specific relationship between the neighborhood ⁇ r and the pixel r is not limited by the present invention.
• a′ n is a value determined, when calculating the first hole pixel, by combining the pixel values of the pixels in the high dynamic range image; to simplify the calculation, the a′ n determined for the first hole pixel can be stored so that the value can be used directly when calculating the pixel values of subsequent hole pixels.
• C1 and B1 relate to the coefficients of the first half of the formula, so they depend on the pixel s;
• C2 and B2 relate to the second half of the formula.
• The coefficients in the second half of the formula have nothing to do with the pixel s, so C2 and B2 do not depend on s.
• Since C2 and B2 are the same for every hole pixel, they do not require repeated calculation; therefore the a′ n determined when calculating the first hole pixel can be reused in the subsequent calculations and does not need to be recalculated.
  • the first proportional coefficient ⁇ 1 is a value larger than the second proportional coefficient ⁇ 2 .
• For example, the value of the first proportional coefficient may be set to 1 and the second proportional coefficient to 0.001.
• The first hole pixel is a hole pixel, among the hole pixels adjacent to the hole pixel r, whose pixel value has already been obtained.
  • the similarity coefficient of the first hole pixel is taken as the similarity coefficient of the hole pixel r.
• The third method uses, as the similarity coefficient of a hole pixel, the similarity coefficient of a surrounding hole pixel whose pixel value has already been calculated, thereby simplifying the step of calculating the similarity coefficient.
  • first method or the second method can be combined with the third method to calculate the similarity coefficient of each hole pixel in the high dynamic range image.
• Obtaining the pixel value of the hole pixel r includes: obtaining the pixel value of the hole pixel r according to a formula, where:
  • I(r) represents the pixel value of the hole pixel r
  • I 2 (r) represents the pixel value of the pixel corresponding to the hole pixel r in the second image
  • a n represents the similarity coefficient of the hole pixel r
• n = 0, 1, …, N;
  • N is a preset value.
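• A hedged sketch of the hole-filling step: a similarity coefficient a for the hole pixel r is estimated from non-hole neighbours s, each weighted by exp(−λ·dist(r, s)), and the hole is then filled as I(r) = a·I 2 (r). The weighted-ratio estimator below is an assumption standing in for the patent's exact similarity-coefficient formula.

```python
import math

# Estimate a similarity coefficient for hole pixel r from non-hole
# neighbours s (weight exp(-lam * dist(r, s))), then fill I(r) = a * I2(r).
# The weighted-ratio estimator is an assumed stand-in for the patent formula.

def fill_hole(I, I2, r, lam=0.5, radius=2):
    x0, y0 = r
    num = den = 0.0
    for y in range(y0 - radius, y0 + radius + 1):
        for x in range(x0 - radius, x0 + radius + 1):
            in_bounds = 0 <= y < len(I) and 0 <= x < len(I[0])
            if in_bounds and I[y][x] is not None:  # skip other hole pixels
                w = math.exp(-lam * math.hypot(x - x0, y - y0))
                num += w * I[y][x] * I2[y][x]
                den += w * I2[y][x] ** 2
    a = num / den if den else 1.0   # similarity coefficient
    return a * I2[y0][x0]
```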
  • the corresponding pixels in the two images represent pixels having the same position in the two images.
• In the method for high dynamic range image synthesis provided by this embodiment, a first image and a second image with different exposures are first obtained; binocular stereo matching is performed on them to obtain a disparity map; a virtual view having the same viewing angle as the second image is synthesized according to the disparity map and the first image; a second grayscale image is acquired from the second image and a virtual view grayscale image from the virtual view; a high dynamic range grayscale image is obtained from the second grayscale image and the virtual view grayscale image through the high dynamic range synthesis algorithm; and the high dynamic range image is finally obtained from the high dynamic range grayscale image, the second grayscale image, the virtual view grayscale image, the second image, and the virtual view. In the process of acquiring the high dynamic range image, the occlusion areas and the noise pixels, which strongly affect the picture, are marked as hole pixels, and the pixel value of each hole pixel is finally obtained by correlating the adjacent pixels of the hole pixel with the corresponding pixels in the second image, so that the quality of the high dynamic range image is improved.
  • An embodiment of the present invention provides a method for synthesizing a disparity map, as shown in FIG. 7, including:
• The first image and the second image are images of the same object captured simultaneously.
• For details, refer to step 101; they are not described herein again.
  • the candidate disparity value set includes at least two candidate disparity values.
  • p represents a pixel p, which is a pixel of a first image corresponding to the candidate disparity value set;
• k is the total number of candidate disparity values in the candidate disparity value set of the pixel p.
• Since the candidate disparity value set of each pixel includes at least two candidate disparity values, k ≥ 2.
• The present invention proposes the following two methods for calculating the matching energy E d (p,d i ):
• The first method: for each candidate disparity value in the candidate disparity value set of the pixel p, the matching energy is calculated using a fitting formula, where:
• The value of the first fitting parameter a and the value of the second fitting parameter b are the values for which the matching energy E d (p,d i ) is the minimum value;
• w(p,q,d i ) = w c (p,q,d i )·w s (p,q,d i )·w d (p,q,d i );
  • the first pixel block ⁇ p represents a pixel block containing the pixel p in the first image;
• the pixel q is a pixel belonging to the first pixel block Ω p and adjacent to the pixel p;
• I 1 (q) denotes the pixel value of the pixel q;
• I 2 (q−d i ) represents the pixel value of the pixel q−d i in the second image corresponding to the pixel q;
  • w c (p, q, d i ) represents the pixel weight value;
  • pixel weight value w c (p, q, d i ) may be according to a formula
  • the distance weight value w s (p, q, d i ) can be according to a formula
  • the disparity weight value w d (p, q, d i ) may be according to a formula
  • I 1 (p) represents a pixel value of the pixel p
• I 2 (p−d i ) represents the pixel value of the pixel p−d i in the second image corresponding to the pixel p;
• The first weighting coefficient γ 1 , the second weighting coefficient γ 2 , the third weighting coefficient γ 3 , and the fourth weighting coefficient γ 4 are preset values.
• The second method: for each candidate disparity value in the candidate disparity value set of the pixel p, the matching energy is calculated using a formula, where:
  • the pixel weight value w c (p, q, d i ) can be based on the formula
  • the distance weight value w s (p, q, d i ) can be based on the formula
  • the parallax weight value w d (p, q, d i ) can be based on the formula
• I' 1 (p) = I 1 (p)·cos θ − I 2 (p−d i )·sin θ;
• I' 2 (p−d i ) = I 1 (p)·sin θ + I 2 (p−d i )·cos θ;
  • the adjustment angle ⁇ is a value that is set to be greater than 0° and less than 90° in advance.
• The candidate disparity value of each pixel in the first image corresponding to the minimum candidate energy is determined as the disparity value of that pixel.
  • I represents a first image
  • the second pixel block N p represents a pixel block containing pixels p in the first image
• V p,q (d i ,d j ) = λ·min(|d i − d j |, V max );
  • m is the total number of candidate disparity values in the candidate disparity value set of the pixel q;
  • the smoothing coefficient ⁇ is a value set in advance; the maximum value V max of the difference in parallax between adjacent pixels is a value set in advance.
• An embodiment of the present invention provides a method for synthesizing a disparity map: the first image and the second image are acquired, and the candidate disparity value set of each pixel of the first image is acquired; then, according to each pixel of the first image, the pixel in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image, the matching energy E d (p,d i ) of each candidate disparity value in the candidate disparity value set of each pixel of the first image is obtained; the disparity value of each pixel in the first image is then determined according to these matching energies, and finally the disparity values of all the pixels in the first image are combined to obtain the disparity map. In this way, when the disparity value of each pixel is calculated, the error between the calculated disparity value and the actual disparity value is greatly reduced, thereby improving the quality of the high dynamic range image.
  • FIG. 8 is a schematic diagram showing the function of a high dynamic range image synthesizing device according to an embodiment of the present invention.
  • the high dynamic range image synthesis apparatus includes an acquisition unit 801, a parallax processing unit 802, a virtual view synthesis unit 803, a grayscale extraction unit 804, a high dynamic range fusion unit 805, and a color interpolation unit 806.
  • the obtaining unit 801 is configured to acquire the first image and the second image.
  • the first image and the second image are images of the same object captured simultaneously with different exposure degrees.
  • the disparity processing unit 802 is configured to perform binocular stereo matching on the first image acquired by the acquiring unit 801 and the second image to obtain a disparity map.
  • the disparity processing unit 802 includes: an obtaining module 8021 , a calculating module 8022 , a determining module 8023 , and a combining module 8024 .
  • the obtaining module 8021 is configured to acquire a candidate disparity value set of each pixel of the first image.
  • the candidate disparity value set includes at least two candidate disparity values.
  • the calculating module 8022 is configured to obtain the matching energy of each candidate disparity value of each pixel of the first image according to each pixel of the first image, the pixels in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image.
  • p denotes a pixel of the first image to which the candidate disparity value set corresponds.
  • k is the total number of candidate disparity values in the candidate disparity value set of pixel p.
  • the calculating module 8022 acquires the matching energy E d (p,d i ) of each of the candidate disparity values of each pixel of the first image by the following two methods:
  • in the first method, the calculating module 8022 is specifically configured to obtain, according to a formula, the matching energy for each candidate disparity value in the candidate disparity value set of the pixel p;
  • the pixel weight value w c (p, q, d i ) may be computed according to a formula;
  • the distance weight value w s (p, q, d i ) may be computed according to a formula;
  • the disparity weight value w d (p, q, d i ) may be computed according to a formula;
  • I 1 (p) represents a pixel value of the pixel p
  • the I 2 (pd i ) represents a pixel value of the pixel pd i in the second image corresponding to the pixel p
  • the first weight coefficient γ 1 , the second weight coefficient γ 2 , the third weight coefficient γ 3 , and the fourth weight coefficient γ 4 are preset values.
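The weighted matching energy described above can be sketched as follows. This is a minimal illustration only: the patent gives the actual weight formulas as figures, so the Gaussian weight forms, the window radius, and the coefficients `g1`-`g3` are assumptions, not the patent's definitions.

```python
import numpy as np

def matching_energy(I1, I2, p, d, radius=2, g1=10.0, g2=3.0, g3=10.0):
    """Hedged sketch of a matching energy E_d(p, d) for candidate disparity d
    at pixel p = (row, col): a weighted absolute-difference aggregation over a
    square block around p, with pixel (w_c), distance (w_s) and disparity
    (w_d) weights multiplied together as in w = w_c * w_s * w_d."""
    r0, c0 = p
    h, w = I1.shape
    num, den = 0.0, 0.0
    for r in range(max(0, r0 - radius), min(h, r0 + radius + 1)):
        for c in range(max(0, c0 - radius), min(w, c0 + radius + 1)):
            if c - d < 0 or c - d >= w:
                continue                      # candidate falls outside image
            # w_c: similarity between neighbour q and centre pixel p
            wc = np.exp(-abs(float(I1[r, c]) - float(I1[r0, c0])) / g1)
            # w_s: spatial distance between q and p
            ws = np.exp(-np.hypot(r - r0, c - c0) / g2)
            # w_d: consistency of p with its candidate match p - d
            wd = np.exp(-abs(float(I1[r0, c0]) - float(I2[r0, c0 - d])) / g3)
            wgt = wc * ws * wd
            num += wgt * abs(float(I1[r, c]) - float(I2[r, c - d]))
            den += wgt
    return num / den if den > 0 else float("inf")
```

For a correct candidate disparity the weighted differences vanish, so the energy is minimal at the true shift.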
  • in the second method, the calculating module 8022 is specifically configured to obtain, according to a formula, the matching energy for each candidate disparity value in the candidate disparity value set of the pixel p;
  • the pixel weight value w c (p, q, d i ) may be computed according to a formula;
  • the distance weight value w s (p, q, d i ) may be computed according to a formula;
  • the disparity weight value w d (p, q, d i ) may be computed according to a formula;
  • I' 1 (p) = I 1 (p)·cos θ - I 2 (p-d i )·sin θ;
  • I' 2 (p-d i ) = I 1 (p)·sin θ - I 2 (p-d i )·cos θ;
  • the adjustment angle ⁇ is a value that is set to be greater than 0° and less than 90° in advance.
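The second method's adjustment rotates the value pair (I 1 (p), I 2 (p-d i )) by the angle θ before the weights are computed, which can compensate for the exposure difference between the two images. A minimal sketch, with the signs taken exactly as they appear in the text above (the function name and default angle are illustrative assumptions):

```python
import math

def rotate_pair(i1, i2, theta_deg=45.0):
    """Hedged sketch of the second method's adjustment: rotate the pixel-value
    pair (I1(p), I2(p-d_i)) by an angle theta, 0 < theta < 90 degrees.
    Signs follow the source text:
        I'1 = I1*cos(t) - I2*sin(t)
        I'2 = I1*sin(t) - I2*cos(t)"""
    t = math.radians(theta_deg)
    i1p = i1 * math.cos(t) - i2 * math.sin(t)
    i2p = i1 * math.sin(t) - i2 * math.cos(t)
    return i1p, i2p
```

With θ = 45°, equal values in the two exposures map to zero in both adjusted channels, so only genuine differences between the views survive the rotation.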
  • a determining module 8023, configured to obtain the disparity value of each pixel in the first image according to the matching energy E d (p, d i ) of each of the candidate disparity values of each pixel of the first image.
  • the determining module 8023 is specifically configured to determine, according to a formula, the candidate disparity value that minimizes the energy for each pixel in the first image as the disparity value of that pixel.
  • I represents a first image
  • the second pixel block N p represents a pixel block containing pixels p in the first image
  • V p,q (d i ,d j ) = λ·min(|d i - d j |, V max );
  • m is the total number of candidate disparity values in the candidate disparity value set of the pixel q;
  • the smoothing coefficient ⁇ is a value set in advance; the maximum value V max of the difference in parallax between adjacent pixels is a value set in advance.
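The disparity selection described above balances each pixel's matching energy against a smoothness term with its neighbours. The sketch below reconstructs V p,q from the definitions of λ and V max given above; the use of a few ICM sweeps over a 1-D scanline of precomputed energies is purely an illustrative assumption (the patent's actual minimisation formula is given only as a figure).

```python
import numpy as np

def smoothness(di, dj, lam=1.0, vmax=2.0):
    """V_{p,q}(d_i, d_j) = lambda * min(|d_i - d_j|, V_max), reconstructed
    from the source's definitions of the smoothing coefficient lambda and
    the maximum inter-pixel disparity difference V_max."""
    return lam * min(abs(di - dj), vmax)

def select_disparities(Ed, lam=1.0, vmax=2.0, sweeps=3):
    """Hedged sketch: pick each pixel's disparity index to minimise matching
    energy plus smoothness with its left/right neighbours, using a few ICM
    sweeps over a 1-D scanline. Ed is an (n_pixels, n_candidates) array of
    matching energies E_d(p, d_i)."""
    n, k = Ed.shape
    d = np.argmin(Ed, axis=1)          # winner-take-all initialisation
    for _ in range(sweeps):
        for p in range(n):
            costs = Ed[p].copy()
            for di in range(k):
                if p > 0:
                    costs[di] += smoothness(di, d[p - 1], lam, vmax)
                if p < n - 1:
                    costs[di] += smoothness(di, d[p + 1], lam, vmax)
            d[p] = int(np.argmin(costs))
    return d
```

The smoothness term pulls an outlier pixel toward its neighbours' disparities, which is exactly why the final labeling has a smaller error than per-pixel winner-take-all.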
  • the combining module 8024 is configured to combine the disparity values of each pixel in the first image to obtain a disparity map.
  • the virtual view synthesizing unit 803 is configured to synthesize, according to the disparity map obtained by the disparity processing unit 802 and the first image acquired by the acquiring unit 801, a virtual view having the same viewing angle as the second image.
  • the grayscale extraction unit 804 is configured to obtain a second grayscale image according to the second image acquired by the acquiring unit 801, and to obtain a virtual view grayscale image according to the virtual view synthesized by the virtual view synthesizing unit 803.
  • the grayscale extracting unit 804 is specifically configured to acquire a virtual view grayscale image marked with a hole pixel according to the virtual view marked with the hole pixel.
  • the high dynamic range fusion unit 805 is configured to obtain a high dynamic range grayscale image by using a high dynamic range synthesis algorithm according to the second grayscale image and the virtual view grayscale image obtained by the grayscale extraction unit 804.
  • the high dynamic range fusion unit 805 is specifically configured to obtain a high dynamic range grayscale image marked with hole pixels by using a high dynamic range synthesis algorithm according to the second grayscale image and the virtual view grayscale image marked with hole pixels.
  • the color interpolation unit 806 is configured to obtain a high dynamic range image according to the high dynamic range grayscale image, the second grayscale image, the virtual view grayscale image, the second image, and the virtual view.
  • the color interpolation unit 806 is specifically configured to sequentially obtain, according to a formula, the red component value I red (e), the green component value I green (e), and the blue component value I blue (e) of each pixel in the high dynamic range image.
  • e represents a pixel e in a high dynamic range image
  • I grey (e) represents the pixel value of the pixel corresponding to the pixel e in the high dynamic range grayscale image; the remaining quantities in the formula denote, respectively, the pixel value of the pixel corresponding to the pixel e in the second grayscale image, the pixel value of the pixel corresponding to the pixel e in the virtual view grayscale image, the red, green, and blue component values of the pixel corresponding to the pixel e in the second image, and the red, green, and blue component values of the pixel corresponding to the pixel e in the virtual view.
  • the color interpolation unit 806 is specifically configured to acquire pixel values of each pixel in the high dynamic range image according to the red component value, the green component value, and the blue component value of each pixel in the high dynamic range image.
  • the color interpolation unit 806 is specifically configured to combine the pixel values of each pixel in the high dynamic range image to obtain a high dynamic range image.
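The interpolation formulas themselves are given in the source only as figures. A plausible sketch, under the loud assumption that each colour component is scaled by the ratio of the fused HDR grey value to the source grey value and the two source views are then averaged (this scheme is an illustration, not the patent's formula):

```python
def interpolate_color(hdr_grey, grey2, greyv, rgb2, rgbv, eps=1e-6):
    """Hedged sketch of colour interpolation for one pixel e: estimate each
    colour component from the second image and from the virtual view by
    grey-ratio scaling, then average the two estimates. rgb2 and rgbv are
    (R, G, B) tuples of the pixel corresponding to e in each view."""
    out = []
    for c2, cv in zip(rgb2, rgbv):
        est2 = c2 * hdr_grey / (grey2 + eps)   # estimate from second image
        estv = cv * hdr_grey / (greyv + eps)   # estimate from virtual view
        out.append(0.5 * (est2 + estv))
    return tuple(out)
```

When the fused grey equals both source greys, the colours pass through unchanged, which is the sanity check any such scheme must satisfy.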
  • the color interpolation unit 806 is specifically configured to obtain a high dynamic range image marked with hole pixels according to the high dynamic range grayscale image marked with hole pixels, the second grayscale image, the virtual view grayscale image marked with hole pixels, the second image, and the virtual view marked with hole pixels.
  • the high dynamic range image synthesizing device further includes: a hole pixel processing unit 807.
  • the hole pixel processing unit 807 is configured to mark the noise pixel or the occlusion area in the virtual view as a hole pixel.
  • the occlusion region is an area produced by the difference between the angles at which the first image and the second image capture the same object; a noise pixel is produced by a pixel whose disparity value is computed incorrectly in the disparity map.
  • the hole pixel processing unit 807 is specifically configured to determine at least two second pixels in the second image.
  • the second pixels are pixels in the second image that have the same pixel value.
  • the hole pixel processing unit 807 is specifically configured to obtain at least two marked pixels in the virtual view according to at least two second pixels in the second image.
  • the at least two marked pixels in the virtual view are pixels respectively corresponding to at least two second pixels in the second image in the virtual view.
  • the hole pixel processing unit 807 is specifically configured to acquire an average pixel value of at least two marked pixels in the virtual view.
  • the hole pixel processing unit 807 is specifically configured to sequentially determine whether a difference between a pixel value and an average pixel value of each of the at least two marked pixels in the virtual view is greater than a noise threshold.
  • the noise threshold is a value that is set in advance to determine noise.
  • the hole pixel processing unit 807 is specifically configured to determine the marked pixel as a noise pixel and mark the noise pixel as a hole pixel if the difference between the pixel value of the marked pixel and the average pixel value is greater than the noise threshold.
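The noise-detection steps above can be sketched as follows. The use of `-1` as the hole marker is an implementation choice, not something the patent specifies; the threshold value is likewise illustrative.

```python
import numpy as np

def mark_noise_holes(virtual, marked_coords, noise_threshold=30.0):
    """Hedged sketch: marked_coords are the pixels of the virtual view that
    correspond to same-valued pixels of the second image, so they should be
    similar to each other. Any marked pixel whose value deviates from the
    group's mean by more than the noise threshold is treated as a noise
    pixel and marked as a hole (sentinel value -1, an assumption)."""
    vals = np.array([float(virtual[r, c]) for r, c in marked_coords])
    mean = vals.mean()
    holes = []
    for (r, c), v in zip(marked_coords, vals):
        if abs(v - mean) > noise_threshold:
            virtual[r, c] = -1          # hole marker (implementation choice)
            holes.append((r, c))
    return holes
```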
  • the hole pixel processing unit 807 is further configured to determine, in the second image, the first pixel corresponding to each hole pixel in the high dynamic range image marked with the hole pixel.
  • the hole pixel processing unit 807 is further configured to acquire the similarity coefficient between the adjacent pixels of each hole pixel in the high dynamic range image and the adjacent pixels of the first pixel, and to obtain, according to the similarity coefficient and the first pixel, the pixel value of each of at least one hole pixel in the high dynamic range image.
  • the hole pixel processing unit 807 may acquire the similarity coefficient between the adjacent pixels of each hole pixel in the high dynamic range image and the adjacent pixels of the first pixel in the following three methods:
  • in the first method, the hole pixel processing unit 807 is specifically configured to obtain, according to a formula, the similarity coefficient between the adjacent pixels of any hole pixel r in the high dynamic range image and the adjacent pixels of the first pixel.
  • s represents a pixel in the neighborhood Ω r of the pixel r in the high dynamic range image;
  • I(s) represents the pixel value of the pixel s;
  • I 2 (s) represents the pixel value of the pixel corresponding to the pixel s in the second image;
  • ||r - s|| represents the distance between the pixel r and the pixel s; σ is a predetermined weighting coefficient for the distance between the pixel r and the pixel s.
  • in the second method, the hole pixel processing unit 807 is specifically configured to obtain, according to a formula, the similarity coefficient between the adjacent pixels of any hole pixel r in the high dynamic range image and the adjacent pixels of the first pixel.
  • the first proportional coefficient ⁇ 1 and the second proportional coefficient ⁇ 2 are preset values; s represents one of the neighborhood ⁇ r of the pixel r in the high dynamic range image; A represents a high dynamic range image; n represents the similarity coefficient obtained when the pixel value of the hole pixel is first calculated.
  • the hole pixel processing unit 807 is specifically configured to determine whether the hole pixel r has a first hole pixel; if a first hole pixel exists, the similarity coefficient of the first hole pixel is used as the similarity coefficient of the hole pixel r.
  • the first hole pixel is an adjacent hole pixel of the hole pixel r whose pixel value has already been obtained.
  • the hole pixel processing unit 807 is specifically configured to obtain the pixel value of the hole pixel r according to a formula.
  • I(r) represents the pixel value of the hole pixel r
  • I 2 (r) represents the pixel value of the pixel corresponding to the hole pixel r in the second image
  • a n represents the similarity coefficient of the hole pixel r
  • n = 0, 1, …, N;
  • N is a preset value.
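The hole-filling steps above can be sketched as follows. The exact formulas are given in the source only as figures, so the specific estimator here (a distance-weighted mean of the ratios I(s)/I 2 (s) used as the similarity coefficient a n , then I(r) = a n · I 2 (r)) is an assumption consistent with the quantities the text defines.

```python
import math

def fill_hole(hdr, img2, r, neighbourhood, sigma=2.0, eps=1e-6):
    """Hedged sketch: estimate the hole pixel's similarity coefficient a_n
    as a distance-weighted mean of the ratios I(s)/I_2(s) over its non-hole
    neighbours s, then set I(r) = a_n * I_2(r). hdr uses None for unfilled
    hole pixels (an implementation choice)."""
    num, den = 0.0, 0.0
    rr, rc = r
    for (sr, sc) in neighbourhood:
        if hdr[sr][sc] is None:         # skip neighbours that are still holes
            continue
        w = math.exp(-math.hypot(sr - rr, sc - rc) / sigma)
        num += w * hdr[sr][sc] / (img2[sr][sc] + eps)
        den += w
    a = num / den if den > 0 else 1.0   # fall back to identity ratio
    hdr[rr][rc] = a * img2[rr][rc]
    return hdr[rr][rc]
```

Because filled holes become usable neighbours for the next hole, processing holes in order implements the "first hole pixel" reuse described in the third method.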
  • A high dynamic range image synthesizing device first obtains a first image and a second image with different exposure degrees, performs binocular stereo matching on the two images to obtain a disparity map, and synthesizes, according to the disparity map and the first image, a virtual view having the same viewing angle as the second image. It then obtains a second grayscale image from the second image and a virtual view grayscale image from the virtual view, obtains a high dynamic range grayscale image from them by a high dynamic range synthesis algorithm, and finally obtains the high dynamic range image from the high dynamic range grayscale image, the second grayscale image, the virtual view grayscale image, the second image, and the virtual view. During this process, the occlusion areas and noise pixels that strongly affect the picture are marked as hole pixels; the relationship between a hole pixel and its corresponding pixel in the second image is then estimated from the relationship between the adjacent pixels of the hole pixel and the adjacent pixels of that corresponding pixel, and the pixel value of the hole pixel is computed from it. Because the relationships between adjacent pixels are considered in the virtual view synthesis, and the occlusion areas and noise pixels are further processed, the quality of the high dynamic range image is improved.
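The overall flow just summarized can be sketched as a pipeline. Every stage is injected as a callable, and all stage names are illustrative assumptions rather than the patent's API:

```python
def hdr_pipeline(img1, img2, stereo_match, synth_view, to_grey,
                 fuse_hdr, colourise):
    """Hedged sketch of the device's overall flow: stereo matching, virtual
    view synthesis, grayscale extraction, HDR fusion on the grey images,
    then colour interpolation back to a full-colour HDR image."""
    disparity = stereo_match(img1, img2)      # binocular stereo matching
    virtual = synth_view(img1, disparity)     # same viewpoint as img2
    grey2, greyv = to_grey(img2), to_grey(virtual)
    hdr_grey = fuse_hdr(grey2, greyv)         # HDR synthesis on grey images
    return colourise(hdr_grey, grey2, greyv, img2, virtual)
```

Hole marking and hole filling would slot in between view synthesis and colour interpolation; they are omitted here to keep the data flow visible.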
  • FIG. 11 is a schematic diagram of functions of a device according to an embodiment of the present invention.
  • the device includes an acquisition unit 1101, a calculation unit 1102, a determination unit 1103, and a processing unit 1104.
  • the acquiring unit 1101 is configured to acquire the first image and the second image.
  • the first image and the second image are images of the same object captured simultaneously with different exposure degrees.
  • the obtaining unit 1101 is further configured to acquire a candidate disparity value set of each pixel of the first image.
  • the candidate disparity value set includes at least two candidate disparity values.
  • the calculating unit 1102 is configured to obtain the matching energy of each candidate disparity value of each pixel of the first image according to each pixel of the first image, the pixels in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image.
  • p represents a pixel of the first image to which the candidate disparity value set corresponds;
  • k is the total number of candidate disparity values in the candidate disparity value set of pixel p.
  • the calculating unit 1102 obtains the matching energy E d (p,d i ) of each candidate disparity value in the candidate disparity value set of each pixel of the first image by the following two methods:
  • in the first method, the calculating unit 1102 is specifically configured to obtain, according to a formula, the matching energy for each candidate disparity value in the candidate disparity value set of the pixel p;
  • the value of the first fitting parameter a and the value of the second fitting parameter b are the values that minimize the matching energy E d (p, d i );
  • w(p, q, d i ) = w c (p,q,d i )·w s (p,q,d i )·w d (p,q,d i );
  • the first pixel block ⁇ p represents a pixel block containing the pixel p in the first image;
  • q is a pixel adjacent to the pixel p and belonging to the first pixel block Ω p ;
  • I 1 (q) denotes the pixel value of the pixel q;
  • I 2 (q-d i ) denotes the pixel value of the pixel q-d i in the second image corresponding to the pixel q;
  • w c (p, q, d i ) represents the pixel weight value;
  • the pixel weight value w c (p, q, d i ) may be computed according to a formula;
  • the distance weight value w s (p, q, d i ) may be computed according to a formula;
  • the disparity weight value w d (p, q, d i ) may be computed according to a formula;
  • I 1 (p) represents a pixel value of the pixel p
  • I 2 (pd i ) represents a pixel value of the pixel pd i in the second image corresponding to the pixel p
  • the first weight coefficient γ 1 , the second weight coefficient γ 2 , the third weight coefficient γ 3 , and the fourth weight coefficient γ 4 are preset values.
  • in the second method, the calculating unit 1102 is specifically configured to obtain, according to a formula, the matching energy for each candidate disparity value in the candidate disparity value set of the pixel p;
  • the pixel weight value w c (p, q, d i ) may be computed according to a formula;
  • the distance weight value w s (p, q, d i ) may be computed according to a formula;
  • the disparity weight value w d (p, q, d i ) may be computed according to a formula;
  • I' 1 (p) = I 1 (p)·cos θ - I 2 (p-d i )·sin θ;
  • I' 2 (p-d i ) = I 1 (p)·sin θ - I 2 (p-d i )·cos θ;
  • the adjustment angle ⁇ is a value that is set to be greater than 0° and less than 90° in advance.
  • a determining unit 1103, configured to obtain the disparity value of each pixel in the first image according to the matching energy E d (p, d i ) of each candidate disparity value in the candidate disparity value set of each pixel of the first image.
  • the determining unit 1103 is specifically configured to determine, according to a formula, the candidate disparity value that minimizes the energy for each pixel in the first image as the disparity value of that pixel.
  • I represents a first image
  • the second pixel block N p represents a pixel block containing pixels p in the first image
  • V p,q (d i ,d j ) = λ·min(|d i - d j |, V max );
  • m is the total number of candidate disparity values in the candidate disparity value set of the pixel q;
  • the smoothing coefficient ⁇ is a value set in advance; the maximum value V max of the difference in parallax between adjacent pixels is a value set in advance.
  • the processing unit 1104 is configured to combine the disparity values of each pixel in the first image to obtain a disparity map.
  • An embodiment of the present invention provides an apparatus that acquires a first image and a second image together with a candidate disparity value set for each pixel of the first image; then, according to each pixel of the first image, the pixels in the second image corresponding to each pixel of the first image, and the candidate disparity value set of each pixel of the first image, it obtains the matching energy E d (p,d i ) of each candidate disparity value in the candidate disparity value set of each pixel of the first image; according to these matching energies it obtains the disparity value of each pixel in the first image, and finally combines the disparity values of all pixels in the first image to obtain the disparity map. In this way, when the disparity value of each pixel is calculated, the error between the obtained disparity value and the true disparity value is greatly reduced, thereby improving the quality of the high dynamic range image.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of units is only a logical function division; in actual implementation there may be another division manner. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may be physically included separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • the above-described integrated unit implemented in the form of a software functional unit can be stored in a computer readable storage medium.
  • the software functional units described above are stored in a storage medium and include instructions for causing a computer device (which may be a personal computer, server, or network device, etc.) to perform portions of the steps of the methods described in various embodiments of the present invention.
  • the foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a method and device for synthesizing a high dynamic range image, belonging to the field of image processing and used to improve the quality of a high dynamic range image. The method comprises: acquiring a first image and a second image; performing binocular stereo matching on the first image and the second image to obtain a disparity map; synthesizing, according to the disparity map and the first image, a virtual view having the same viewing angle as the second image; obtaining a second grayscale image according to the second image, and obtaining a virtual view grayscale image according to the virtual view; obtaining a high dynamic range grayscale image by means of a high dynamic range synthesis algorithm according to the second grayscale image and the virtual view grayscale image; and obtaining a high dynamic range image according to the high dynamic range grayscale image, the second grayscale image, the virtual view grayscale image, the second image, and the virtual view. The present invention is suitable for scenarios of synthesizing a high dynamic range image.
PCT/CN2014/089071 2014-03-18 2014-10-21 Method and device for synthesizing a high dynamic range image WO2015139454A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410101591.1A CN104935911B (zh) 2014-03-18 2014-03-18 Method and device for high dynamic range image synthesis
CN201410101591.1 2014-03-18

Publications (1)

Publication Number Publication Date
WO2015139454A1 true WO2015139454A1 (fr) 2015-09-24

Family

ID=54122843

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/089071 WO2015139454A1 (fr) 2014-03-18 2014-10-21 Procédé et dispositif de synthétisation d'une image de plage dynamique élevée

Country Status (2)

Country Link
CN (1) CN104935911B (fr)
WO (1) WO2015139454A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017189102A1 (fr) * 2016-04-28 2017-11-02 Qualcomm Incorporated Performing intensity equalization with respect to monochrome and color images
CN108354435A (zh) * 2017-01-23 2018-08-03 上海长膳智能科技有限公司 Automatic cooking apparatus and cooking method using the same
CN112149493A (zh) * 2020-07-31 2020-12-29 上海大学 Road elevation measurement method based on binocular stereo vision

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3355300B1 (fr) * 2015-09-25 2021-09-29 Sony Group Corporation Image processing device and image processing method
US10097747B2 (en) * 2015-10-21 2018-10-09 Qualcomm Incorporated Multiple camera autofocus synchronization
US9998720B2 (en) * 2016-05-11 2018-06-12 Mediatek Inc. Image processing method for locally adjusting image data of real-time image
CN108335279B (zh) * 2017-01-20 2022-05-17 微软技术许可有限责任公司 Image fusion and HDR imaging
WO2018209603A1 (fr) * 2017-05-17 2018-11-22 深圳配天智能技术研究院有限公司 Image processing method, image processing device, and storage medium
CN107396082B (zh) * 2017-07-14 2020-04-21 歌尔股份有限公司 Image data processing method and apparatus
CN109819173B (zh) * 2017-11-22 2021-12-03 浙江舜宇智能光学技术有限公司 Depth fusion method based on a TOF imaging system, and TOF camera
CN107948519B (zh) 2017-11-30 2020-03-27 Oppo广东移动通信有限公司 Image processing method, apparatus, and device
CN108184075B (zh) * 2018-01-17 2019-05-10 百度在线网络技术(北京)有限公司 Method and apparatus for generating an image
CN110276714B (zh) * 2018-03-16 2023-06-06 虹软科技股份有限公司 Fast-scanning panoramic image synthesis method and device
TWI684165B (zh) * 2018-07-02 2020-02-01 華晶科技股份有限公司 Image processing method and electronic device
CN109842791B (zh) * 2019-01-15 2020-09-25 浙江舜宇光学有限公司 Image processing method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101887589A (zh) * 2010-06-13 2010-11-17 东南大学 Stereo-vision-based method for reconstructing real-shot low-texture images
CN102422124A (zh) * 2010-05-31 2012-04-18 松下电器产业株式会社 Imaging device, imaging method, and program
CN102779334A (zh) * 2012-07-20 2012-11-14 华为技术有限公司 Method and device for correcting multi-exposure motion images

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9210322B2 (en) * 2010-12-27 2015-12-08 Dolby Laboratories Licensing Corporation 3D cameras for HDR

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102422124A (zh) * 2010-05-31 2012-04-18 松下电器产业株式会社 Imaging device, imaging method, and program
CN101887589A (zh) * 2010-06-13 2010-11-17 东南大学 Stereo-vision-based method for reconstructing real-shot low-texture images
CN102779334A (zh) * 2012-07-20 2012-11-14 华为技术有限公司 Method and device for correcting multi-exposure motion images

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017189102A1 (fr) * 2016-04-28 2017-11-02 Qualcomm Incorporated Performing intensity equalization with respect to monochrome and color images
WO2017189104A1 (fr) * 2016-04-28 2017-11-02 Qualcomm Incorporated Parallax mask fusion of color and mono images for macrophotography
US10341543B2 (en) 2016-04-28 2019-07-02 Qualcomm Incorporated Parallax mask fusion of color and mono images for macrophotography
US10362205B2 (en) 2016-04-28 2019-07-23 Qualcomm Incorporated Performing intensity equalization with respect to mono and color images
CN108354435A (zh) * 2017-01-23 2018-08-03 上海长膳智能科技有限公司 Automatic cooking apparatus and cooking method using the same
CN112149493A (zh) * 2020-07-31 2020-12-29 上海大学 Road elevation measurement method based on binocular stereo vision
CN112149493B (zh) * 2020-07-31 2022-10-11 上海大学 Road elevation measurement method based on binocular stereo vision

Also Published As

Publication number Publication date
CN104935911A (zh) 2015-09-23
CN104935911B (zh) 2017-07-21

Similar Documents

Publication Publication Date Title
WO2015139454A1 (fr) Method and device for synthesizing a high dynamic range image
US10540806B2 (en) Systems and methods for depth-assisted perspective distortion correction
US11830141B2 (en) Systems and methods for 3D facial modeling
US20190244379A1 (en) Systems and Methods for Depth Estimation Using Generative Models
US10853625B2 (en) Facial signature methods, systems and software
CN106981078B (zh) Sight-line correction method and device, intelligent conference terminal, and storage medium
WO2020007320A1 (fr) Multi-view image fusion method and apparatus, computer device, and storage medium
WO2012114639A1 (fr) Object display device, object display method, and object display program
EP2064675A1 (fr) Method for determining a depth map from images, device for determining a depth map
AU2020203790B2 (en) Transformed multi-source content aware fill
JP2010140097A (ja) Image generation method, image authentication method, image generation device, image authentication device, program, and recording medium
WO2018225518A1 (fr) Image processing device, image processing method, program, and telecommunication system
KR101853269B1 (ko) Apparatus for stitching depth maps of stereo images
EP3446283A1 (fr) Image stitching method and device
WO2021003807A1 (fr) Image depth estimation method and device, electronic apparatus, and storage medium
KR20190044439A (ko) Method for stitching depth maps of stereo images
TWI757658B (zh) Image processing system and image processing method
DK3189493T3 (en) PERSPECTIVE CORRECTION OF DIGITAL PHOTOS USING DEPTH MAP
JPH05303629A (ja) Shape synthesis method
JP2007053621A (ja) Image generating apparatus
JP2011165081A (ja) Image generation method, image generation device, and program
TW202029056A (zh) Disparity estimation from a wide-angle image
CN107403448B (zh) Cost function generation method and cost function generation device
GB2585197A (en) Method and system for obtaining depth data
CN105282534A (zh) System and method for embedding stereoscopic images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14886281

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14886281

Country of ref document: EP

Kind code of ref document: A1