WO2015196456A1 - 基于拜尔颜色滤波阵列的高动态范围视频录制方法和装置 - Google Patents

基于拜尔颜色滤波阵列的高动态范围视频录制方法和装置 Download PDF

Info

Publication number
WO2015196456A1
WO2015196456A1 PCT/CN2014/080967 CN2014080967W WO2015196456A1 WO 2015196456 A1 WO2015196456 A1 WO 2015196456A1 CN 2014080967 W CN2014080967 W CN 2014080967W WO 2015196456 A1 WO2015196456 A1 WO 2015196456A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
image frame
value
double
column
Prior art date
Application number
PCT/CN2014/080967
Other languages
English (en)
French (fr)
Inventor
曹子晟
俞利富
钟文辉
王铭钰
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to PCT/CN2014/080967 priority Critical patent/WO2015196456A1/zh
Priority to JP2016531711A priority patent/JP6090820B2/ja
Publication of WO2015196456A1 publication Critical patent/WO2015196456A1/zh
Priority to US15/386,630 priority patent/US9858644B2/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4015Image demosaicing, e.g. colour filter arrays [CFA] or Bayer patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • HELECTRICITY
    • H01ELECTRIC ELEMENTS
    • H01LSEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L27/00Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L27/14Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L27/144Devices controlled by radiation
    • H01L27/146Imager structures
    • H01L27/14601Structural or functional details thereof
    • H01L27/1462Coatings
    • H01L27/14621Colour filter arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/73Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/741Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/843Demosaicing, e.g. interpolating colour pixel values
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/134Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50Control of the SSIS exposure
    • H04N25/57Control of the dynamic range
    • H04N25/58Control of the dynamic range involving two or more exposures
    • H04N25/581Control of the dynamic range involving two or more exposures acquired simultaneously
    • H04N25/583Control of the dynamic range involving two or more exposures acquired simultaneously with different integration times

Definitions

  • the present invention relates to the field of high dynamic range (hereinafter referred to as HDR) video recording technology, and more particularly to a high dynamic range video recording method and apparatus based on a Bayer color filter array.
  • HDR high dynamic range
  • CMOS Complementary Metal Oxide Semiconductor
  • a sensor with a narrow dynamic range that records a scene with a wide dynamic range requires multiple exposures. Taking a 100 dB scene as an example, one can first raise the shutter speed to take an underexposed photo covering 0 to 60 dB, then lower the shutter speed to take an overexposed photo covering 40 to 100 dB, and finally merge the two photos into one and recompute the grayscale mapping relationship.
  • CMOS sensors typically use a color filter array structure, and the image captured through a Bayer filter array is referred to as a Bayer image.
  • Each pixel records 10 to 14 bits of monochrome information, and the RGB three-primary-color information must be calculated by interpolating the pixel with its surrounding pixels.
  • the existing methods of shooting HDR video mainly involve two key technical points: multi-exposure frame acquisition and the HDR frame merging algorithm.
  • Multi-exposure frame acquisition obtains multiple frames by high-speed continuous shooting at different exposure values; if fast-moving objects are present in the scene, point-to-point matching between frames is impossible and the merged picture is prone to motion blur.
  • High-speed continuous shooting also requires a very high frame rate, which limits the minimum shutter speed available for video capture.
  • the HDR algorithm first estimates the camera's brightness response function from the multiple exposure frames, then uses grayscale mapping to compute a new grayscale table, and finally computes the new HDR image. Since the camera's brightness response function usually requires parameter estimation for all gray levels, the computational complexity is acceptable for 8-bit images (256 gray levels), but for Bayer images (14 bits) the computation is far too large, so the approach cannot be applied directly to HDR video recording. Weighted averaging is another common method of frame merging. Taking two frames as an example, the merged pixel value p_new can be calculated by equation (1): p_new = ω1·p1 + (1 − ω1)·p2.
  • p1 and p2 are the pixel values at a given position in the underexposed and overexposed images respectively, and ω1 is a number between 0 and 1 that represents the weight of pixel 1 in the merged pixel.
  • Conventional methods generally consider the over- and under-exposure of pixels when assigning weights, and usually set a threshold to detect anomalies in exposure.
  • the weight of an over- or under-exposed pixel will be much lower than that of a normally exposed pixel. Taking an 8-bit image as an example, the weight is calculated with equation (2):
  • where T1 and T2 are the underexposure and overexposure thresholds respectively. This simple distinction between overexposure and underexposure adapts poorly to different scenes; artifacts are prone to occur, and unnatural transitions appear when pixels are merged.
  • the technical problem to be solved by the present invention is to provide a high dynamic range video recording method and apparatus based on a Bayer color filter array, which can overcome the problem of high speed motion blur and reduce the frame rate of high speed continuous shooting.
  • an embodiment of the present invention provides a high dynamic range video recording method based on a Bayer color filter array, including:
  • performing exposure with different integration times configured for the odd and even double columns, obtaining an image frame in which the odd and even double columns have different exposure values, where the odd double columns are the columns of the image frame whose column number is exactly divisible by 4 or leaves a remainder of 1 when divided by 4, and the even double columns are the columns whose column number leaves a remainder of 2 or 3 when divided by 4;
  • the image frame is decomposed into an underexposed image frame and an overexposed image frame, in which the underexposed double columns and the missing double columns alternate in the underexposed image frame, and the overexposed double columns and the missing double columns alternate in the overexposed image frame;
  • the pixel recovery values of the missing double-column pixels in the underexposed image frame are obtained as the pixel values of the corresponding pixel points according to the pixel values of the underexposed double-column pixels on the red, green and blue channels respectively;
  • for the overexposed image frame, the pixel recovery values of the missing double-column pixels in the overexposed image frame are obtained on the red, green and blue channels from the pixel values of the overexposed double-column pixels and used as the pixel values of the corresponding pixels;
  • the overexposed image frame and the underexposed image frame are merged according to the pixel values of the pixels on the red, green and blue channels in the underexposed image frame and the overexposed image frame to obtain a high dynamic range frame;
  • the step of obtaining, on the red, green and blue channels, the pixel recovery values of the missing double-column pixels in the underexposed image frame from the pixel values of the underexposed double-column pixels and using them as the pixel values of the corresponding pixels includes: calculating pixel estimation values of the missing double-column pixels from the pixel values of adjacent underexposed double-column pixels; and obtaining the pixel recovery values of the missing pixels on the green channel by interpolation;
  • the recovery values of the differences at the missing pixel positions on the red/blue channels are added to the recovered pixel values on the green channel to obtain the pixel recovery values on the red and blue channels, which replace the estimated values of the missing double-column pixels in the underexposed image frame and serve as the pixel values of the corresponding pixels.
  • the step of obtaining, on the red, green and blue channels, the pixel recovery values of the missing double-column pixels in the overexposed image frame from the pixel values of the overexposed double-column pixels and using them as the pixel values of the corresponding pixels includes: calculating pixel estimation values of the missing double-column pixels from the pixel values of adjacent overexposed double-column pixels; and obtaining the pixel recovery values of the missing pixels on the green channel by interpolation;
  • the recovery values of the differences at the missing pixel positions on the red/blue channels are added to the recovered pixel values on the green channel to obtain the pixel recovery values on the red and blue channels, which replace the estimated values of the missing double-column pixels in the overexposed image frame and serve as the pixel values of the corresponding pixels.
  • the interpolation method includes at least one of bilinear interpolation and cubic interpolation.
  • the step of obtaining the high dynamic range frame by combining the overexposed image frame and the underexposed image frame according to the pixel values of the pixels on the red, green and blue channels in the underexposed image frame and the overexposed image frame comprises:
  • a high dynamic range frame is obtained by merging the overexposed image frame and the underexposed image frame according to the weight of each pixel.
  • the step of obtaining the weight of each pixel point according to the brightness of each pixel in the underexposed image frame and the overexposed image frame includes:
  • the adaptive underexposure threshold T1,new and the adaptive overexposure threshold T2,new are calculated from the preset underexposure threshold T1 and overexposure threshold T2 using the following relationships: T1,new = max{ P2,x : x ∈ U(P1 < T1) }, T2,new = min{ P1,x : x ∈ U(P2 > T2) };
  • U(P1 < T1) denotes the set of all underexposed pixels in P1 whose luminance is smaller than T1, and U(P2 > T2) denotes the set of all overexposed pixels in P2 whose luminance is greater than T2;
  • the weight of each pixel is calculated from the adaptive thresholds using the following relationship:
  • ω1 is the weight of the pixel whose luminance is P1 in the underexposed image frame,
  • ω2 is the weight of the pixel whose luminance is P2 in the overexposed image frame.
  • the step of combining the overexposed image frame and the underexposed image frame according to the weight of each pixel to obtain a high dynamic range frame comprises:
  • an embodiment of the present invention further provides a high dynamic range video recording apparatus based on a Bayer color filter array, wherein the apparatus comprises:
  • a sensor module, configured to perform exposure with different integration times configured for the odd and even double columns, to obtain an image frame in which the odd and even double columns have different exposure values, where the odd double columns are the columns of the image frame whose column number is exactly divisible by 4 or leaves a remainder of 1 when divided by 4, and the even double columns are the columns whose column number leaves a remainder of 2 or 3 when divided by 4;
  • the decomposition module is connected to the sensor module and is configured to decompose the image frame into an underexposed image frame and an overexposed image frame, in which the underexposed double columns and the missing double columns alternate in the underexposed image frame, and the overexposed double columns and the missing double columns alternate in the overexposed image frame;
  • the underexposure pixel recovery module is connected to the decomposition module and is configured, for the underexposed image frame, to obtain on the red, green and blue channels the pixel recovery values of the missing double-column pixels in the underexposed image frame from the pixel values of the underexposed double-column pixels, and to use them as the pixel values of the corresponding pixels;
  • the overexposure pixel recovery module is connected to the decomposition module and is configured, for the overexposed image frame, to obtain on the red, green and blue channels the pixel recovery values of the missing double-column pixels in the overexposed image frame from the pixel values of the overexposed double-column pixels, and to use them as the pixel values of the corresponding pixels;
  • the merging module is connected to the underexposure pixel recovery module and the overexposure pixel recovery module, and is configured to merge the overexposed image frame and the underexposed image frame according to the pixel values of the pixels on the red, green and blue channels in the underexposed image frame and the overexposed image frame to obtain a high dynamic range frame.
  • the underexposure pixel recovery module is used to:
  • the recovery values of the differences at the missing pixel positions on the red/blue channels are added to the recovered pixel values on the green channel to obtain the pixel recovery values on the red and blue channels, which replace the estimated values of the missing double-column pixels in the underexposed image frame and serve as the pixel values of the corresponding pixels.
  • the overexposure pixel recovery module is used to:
  • the interpolation includes at least one of bilinear interpolation and cubic interpolation.
  • merge module is used to:
  • a high dynamic range frame is obtained by merging the overexposed image frame and the underexposed image frame according to the weight of each pixel.
  • the merge module is also used to:
  • the adaptive underexposure threshold T1,new and the adaptive overexposure threshold T2,new are calculated from the preset underexposure threshold T1 and overexposure threshold T2 using the following relationships: T1,new = max{ P2,x : x ∈ U(P1 < T1) }, T2,new = min{ P1,x : x ∈ U(P2 > T2) };
  • U(P1 < T1) denotes the set of all underexposed pixels in P1 whose luminance is smaller than T1, and U(P2 > T2) denotes the set of all overexposed pixels in P2 whose luminance is greater than T2;
  • ω1 is the weight of the pixel whose luminance is P1 in the underexposed image frame,
  • and ω2 is the weight of the pixel whose luminance is P2 in the overexposed image frame.
  • the merge module is also used to:
  • an embodiment of the present invention provides a high dynamic range video recording method based on a Bayer color filter array, including:
  • an image frame in which the odd and even double columns have different exposure values is obtained, where the odd double columns are the columns of the image frame whose column number is exactly divisible by 4 or leaves a remainder of 1 when divided by 4, and the even double columns are the columns whose column number leaves a remainder of 2 or 3 when divided by 4;
  • the image frame is decomposed into an underexposed image frame and an overexposed image frame, in which the underexposed double columns and the missing double columns alternate in the underexposed image frame, and the overexposed double columns and the missing double columns alternate in the overexposed image frame;
  • for the underexposed image frame, the pixel recovery values of the missing double-column pixels in the underexposed image frame are obtained on the red, green and blue channels from the pixel values of the underexposed double-column pixels and used as the pixel values of the corresponding pixels; for the overexposed image frame, the pixel recovery values of the missing double-column pixels in the overexposed image frame are obtained on the red, green and blue channels from the pixel values of the overexposed double-column pixels and used as the pixel values of the corresponding pixels; and the overexposed image frame and the underexposed image frame are merged according to the pixel values of the pixels on the red, green and blue channels in the two frames to obtain a high dynamic range frame;
  • the step of obtaining the high dynamic range frame by combining the overexposed image frame and the underexposed image frame according to the pixel values of the pixels on the red, green and blue channels in the underexposed image frame and the overexposed image frame comprises:
  • a high dynamic range frame is obtained by merging the overexposed image frame and the underexposed image frame according to the weight of each pixel.
  • the step of obtaining, on the red, green and blue channels, the pixel recovery values of the missing double-column pixels in the underexposed image frame from the pixel values of the underexposed double-column pixels and using them as the pixel values of the corresponding pixels includes: calculating pixel estimation values of the missing double-column pixels from the pixel values of adjacent underexposed double-column pixels; obtaining the pixel recovery values of the missing pixels on the green channel by interpolation; calculating the differences between the pixel values of the pixels on the red and blue channels and the pixel recovery values on the green channel; and interpolating these differences to obtain the recovery values of the differences at the missing pixel positions on the red/blue channels;
  • the recovery values of the differences at the missing pixel positions on the red/blue channels are added to the recovered pixel values on the green channel to obtain the pixel recovery values on the red and blue channels, which replace the estimated values of the missing double-column pixels in the underexposed image frame and serve as the pixel values of the corresponding pixels.
  • the step of obtaining, on the red, green and blue channels, the pixel recovery values of the missing double-column pixels in the overexposed image frame from the pixel values of the overexposed double-column pixels and using them as the pixel values of the corresponding pixels includes: calculating pixel estimation values of the missing double-column pixels from the pixel values of adjacent overexposed double-column pixels; and obtaining the pixel recovery values of the missing pixels on the green channel by interpolation;
  • the recovery values of the differences at the missing pixel positions on the red/blue channels are added to the recovered pixel values on the green channel to obtain the pixel recovery values on the red and blue channels, which replace the estimated values of the missing double-column pixels in the overexposed image frame and serve as the pixel values of the corresponding pixels.
  • the interpolation method includes at least one of bilinear interpolation and cubic interpolation.
  • the step of obtaining the weight of each pixel point according to the brightness of each pixel in the underexposed image frame and the overexposed image frame includes:
  • the adaptive underexposure threshold T1,new and the adaptive overexposure threshold T2,new are calculated from the preset underexposure threshold T1 and overexposure threshold T2 using the following relationships: T1,new = max{ P2,x : x ∈ U(P1 < T1) }, T2,new = min{ P1,x : x ∈ U(P2 > T2) };
  • U(P1 < T1) denotes the set of all underexposed pixels in P1 whose luminance is smaller than T1, and U(P2 > T2) denotes the set of all overexposed pixels in P2 whose luminance is greater than T2;
  • ω1 is the weight of the pixel whose luminance is P1 in the underexposed image frame,
  • and ω2 is the weight of the pixel whose luminance is P2 in the overexposed image frame.
  • the step of combining the overexposed image frame and the underexposed image frame according to the weight of each pixel to obtain a high dynamic range frame comprises:
  • the beneficial effects of the present invention are: an image frame in which the odd and even double columns have different exposure values is obtained by performing exposure with different integration times configured for the odd and even double columns; the image frame is decomposed into an underexposed image frame and an overexposed image frame, in which the underexposed double columns and the missing double columns alternate in the underexposed image frame and the overexposed double columns and the missing double columns alternate in the overexposed image frame; for the underexposed image frame, the pixel recovery values of the missing double-column pixels are obtained on the red, green and blue channels from the pixel values of the underexposed double-column pixels and used as the pixel values of the corresponding pixels; for the overexposed image frame, the pixel recovery values of the missing double-column pixels are obtained on the red, green and blue channels from the pixel values of the overexposed double-column pixels and used as the pixel values of the corresponding pixels; and the overexposed image frame and the underexposed image frame are then merged according to the pixel values of the pixels on the red, green and blue channels in the two frames to obtain a high dynamic range frame, which overcomes the problem of high-speed motion blur and reduces the frame rate required for high-speed continuous shooting.
  • FIG. 1 is a schematic flow chart of a high dynamic range video recording method based on a Bayer color filter array according to a first embodiment of the present invention
  • Figure 2 is a Bayer diagram of exposure of the first embodiment of the present invention
  • FIG. 3 is a schematic flowchart of a method for implementing step S12 in the first embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a method for acquiring pixel estimation values of missing double-column pixels in step S12 of the first embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a method for acquiring pixel recovery values of pixels on the red channel in step S12 of the first embodiment of the present invention.
  • FIG. 6 is a schematic flowchart of a method for implementing step S14 in the first embodiment of the present invention.
  • FIG. 7 is a schematic diagram showing the results of a high dynamic range video recording method based on a Bayer color filter array of the present invention.
  • Figure 8 is a block diagram showing the structure of a high dynamic range video recording apparatus based on a Bayer color filter array according to a first embodiment of the present invention.
  • FIG. 1 is a schematic flow chart of a high dynamic range video recording method based on a Bayer color filter array according to a first embodiment of the present invention.
  • a high dynamic range video recording method based on a Bayer color filter array includes:
  • Step S10: performing exposure with different integration times configured for the odd and even double columns, obtaining a single image frame in which the odd and even double columns have different exposure values, where the odd double columns are the columns of the image frame whose column number is exactly divisible by 4 or leaves a remainder of 1 when divided by 4, and the even double columns are the columns whose column number leaves a remainder of 2 or 3 when divided by 4.
  • In the Bayer pattern used, one color filter unit contains one R unit, one B unit and two G units, spatially arranged in a 2x2 pattern; that is, each color filter unit occupies two rows and two columns.
  • In step S10, exposure is performed in units of double columns, exposing the odd double columns and the even double columns separately; that is, the odd double columns are underexposed and the even double columns are overexposed, yielding a single image frame in which the odd and even double columns have different exposure values.
  • For example, by designing the sensor to configure different integration times for the odd and even double columns, image frames with two different exposure values are obtained, each having only half the width of the original frame.
  • Where video shot with the conventional method requires a frame rate of 60 frames per second, only 30 frames per second is required in the present invention to achieve the same effect.
  • This single-frame multi-exposure approach guarantees that every exposed double column contains a complete set of color filter units.
  • Step S11: decomposing the image frame into an underexposed image frame and an overexposed image frame, in which the underexposed double columns and the missing double columns alternate in the underexposed image frame, and the overexposed double columns and the missing double columns alternate in the overexposed image frame.
  • step S11 the image frame obtained in step S10 is decomposed to obtain an underexposed image frame and an overexposed image frame.
  • the odd-numbered double columns of the underexposure remain unchanged, and the even double columns are changed to the missing columns, resulting in an underexposed image frame.
  • the even double columns of the overexposure remain unchanged, and the odd double columns are changed to the missing columns, resulting in an overexposed image frame. Therefore, the underexposure double column and the missing double column in the underexposed image frame are sequentially spaced apart, and the overexposed double column and the missing double column in the overexposed image frame are spaced apart at intervals.
  • Step S12 For the underexposed image frame, the pixel recovery values of the missing double column pixels in the underexposed image frame are obtained as the pixel values of the corresponding pixel points according to the pixel values of the underexposed double column pixels on the red, green and blue channels, respectively.
  • the recovery of RGB information in underexposed image frames includes:
  • Step S120 Calculate the pixel estimation value of the missing double-column pixel by using the pixel values of the adjacent under-exposed double-column pixels.
  • In the underexposed image frame, the even double columns are the missing columns, and the missing values of the even double columns are calculated by interpolating the adjacent odd double columns, for example by average interpolation.
  • Step S121 Obtaining a pixel recovery value of the missing pixel on the green channel by interpolation.
  • the interpolation method includes at least one of bilinear interpolation and cubic interpolation.
  • other interpolation methods may be applied to recover the information of the green channel.
  • Step S122 Calculate the difference between the pixel value of the pixel on the red and blue channels and the pixel recovery value on the green channel.
  • the pixels of the red and blue channels are not directly restored, but the difference between the known pixels and the green channel of the red and blue channels is used to restore the red and blue channels.
  • Step S123: The differences between the pixel values of the pixels on the red and blue channels and the pixel recovery values on the green channel are interpolated to obtain the recovery values of the differences at the missing pixel positions on the red/blue channels. That is, the difference R1 between the known red pixels and the green channel is interpolated to obtain the recovery values of the differences at the missing pixel positions on the red channel.
  • Step S124: The recovery values of the differences at the missing pixel positions on the red/blue channels are added to the recovered pixel values on the green channel to obtain the pixel recovery values on the red and blue channels, which replace the estimated values of the missing double-column pixels in the underexposed image frame and serve as the pixel values of the corresponding pixels.
  • After the recovery value R1 of the difference at a missing red pixel is obtained, the estimated red value is R = R1 + G, where G is the recovered green value at the same position.
  • When the missing column is on the left side, the pixel recovery values of the missing column are easily obtained with a similar calculation.
  • The pixel recovery values of the missing columns then replace the missing values of the even double columns previously estimated by interpolating the adjacent odd double columns.
  • the blue channel is processed in the same way as shown in Figure 5 and will not be described here.
  • the pixel recovery values of the missing columns on the red, green and blue channels in the underexposed image frame are obtained for subsequent processing of the merged frames.
  • Step S13: for the overexposed image frame, the pixel recovery values of the missing double-column pixels in the overexposed image frame are obtained on the red, green and blue channels from the pixel values of the overexposed double-column pixels, and used as the pixel values of the corresponding pixels.
  • The pixel recovery values of the missing pixels on the green channel are obtained by interpolation; the differences between the pixel values of the pixels on the red and blue channels and the pixel recovery values on the green channel are calculated; these differences are interpolated to obtain the recovery values of the differences at the missing pixel positions on the red/blue channels; and the recovery values of the differences are added to the recovered pixel values on the green channel to obtain the pixel recovery values on the red and blue channels, which replace the estimated values of the missing double-column pixels in the overexposed image frame and serve as the pixel values of the corresponding pixels.
  • Step S14 Combine the overexposed image frame and the underexposed image frame according to the pixel values of the pixels on the red, green and blue channels in the underexposed image frame and the overexposed image frame to obtain a high dynamic range frame.
  • Step S140 Acquire brightness of each pixel in the underexposed image frame and the overexposed image frame according to the pixel values of the pixels on the red, green and blue channels.
  • The brightness is acquired using existing techniques; for example, the brightness may be (R+G+B)/3.
  • Other methods may also be used to obtain the brightness from the pixel values of the pixels on the red, green and blue channels.
  • Step S141 The weight of each pixel is obtained according to the brightness of each pixel in the underexposed image frame and the overexposed image frame.
  • In step S141, the adaptive underexposure threshold T1,new and the adaptive overexposure threshold T2,new are calculated from the preset underexposure threshold T1 and overexposure threshold T2 using the following relationships: T1,new = max{ P2,x : x ∈ U(P1 < T1) }, T2,new = min{ P1,x : x ∈ U(P2 > T2) };
  • U(P1 < T1) denotes the set of all underexposed pixels in P1 whose luminance is smaller than T1,
  • and U(P2 > T2) denotes the set of all overexposed pixels in P2 whose luminance is greater than T2.
  • T1,new represents the upper bound of the pixel values in P2 at the positions that are underexposed in P1,
  • and T2,new represents the lower bound of the pixel values in P1 at the positions that are overexposed in P2.
  • the weight of each pixel is then calculated from the adaptive thresholds using the following relationship:
  • ω1 is the weight of the pixel whose luminance is P1 in the underexposed image frame,
  • ω2 is the weight of the pixel whose luminance is P2 in the overexposed image frame.
  • Step S142 Combine the overexposed image frame and the underexposed image frame according to the weight of each pixel to obtain a high dynamic range frame.
  • Gaussian blurring is performed on the weight maps ω1 and ω2.
  • the picture in the upper left corner is an underexposed picture
  • the upper right corner is an overexposed picture
  • the lower left corner is a picture combined by a conventional method
  • the lower right corner is a picture obtained by applying the method of the present invention. It can be seen that the picture produced by the method of the present invention has good contrast, no artifacts and a natural transition band, and is superior to the conventional method.
  • an image frame in which the odd and even double columns have different exposure values is obtained by performing exposure with different integration times configured for the odd and even double columns; the image frame is decomposed into an underexposed image frame and an overexposed image frame, in which the underexposed double columns and the missing double columns alternate in the underexposed image frame and the overexposed double columns and the missing double columns alternate in the overexposed image frame;
  • for the underexposed image frame, the pixel recovery values of the missing double-column pixels are obtained on the red, green and blue channels from the pixel values of the underexposed double-column pixels and used as the pixel values of the corresponding pixels; for the overexposed image frame, the pixel recovery values of the missing double-column pixels are obtained on the red, green and blue channels from the pixel values of the overexposed double-column pixels and used as the pixel values of the corresponding pixels;
  • and the overexposed image frame and the underexposed image frame are then merged according to the pixel values of the pixels on the red, green and blue channels in the two frames to obtain a high dynamic range frame, which overcomes the problem of high-speed motion blur, reduces the frame rate required for high-speed continuous shooting, and also solves the problems of artifacts and unnatural transitions.
  • FIG. 8 is a schematic structural diagram of a high dynamic range video recording apparatus based on a Bayer color filter array according to a first embodiment of the present invention.
  • the high dynamic range video recording apparatus 10 based on the Bayer color filter array includes: a sensor module 11, a decomposition module 12, an underexposure pixel recovery module 13, an overexposure pixel recovery module 14, and a merging module 15.
  • the sensor module 11 is configured to perform exposure with different integration times configured for the odd and even double columns, obtaining an image frame in which the odd and even double columns have different exposure values.
  • the decomposition module 12 is connected to the sensor module 11 for decomposing the image frame into an underexposed image frame and an overexposed image frame, wherein the underexposure double column and the missing double column are sequentially spaced apart in the underexposed image frame, and the overexposed image frame is over The exposure double column and the missing double column are sequentially spaced apart.
  • the underexposure pixel recovery module 13 is connected to the decomposing module 12 for acquiring, for the underexposed image frame, the pixels of the underexposed image frame in the underexposed image frame according to the pixel values of the underexposed double column pixel points on the red, green and blue channels respectively. The recovery value is taken as the pixel value of the corresponding pixel.
  • the overexposure pixel recovery module 14 is coupled to the decomposition module 12 and is configured, for the overexposed image frame, to obtain on the red, green and blue channels the pixel recovery values of the missing double-column pixels in the overexposed image frame from the pixel values of the overexposed double-column pixels, and to use them as the pixel values of the corresponding pixels.
  • the merging module 15 is connected to the underexposure pixel recovery module 13 and the overexposure pixel recovery module 14, and is configured to merge the overexposed image frame and the underexposed image frame according to the pixel values of the pixels on the red, green and blue channels in the underexposed image frame and the overexposed image frame to obtain a high dynamic range frame.
  • one color filter unit includes one R unit, one B unit and two G units, spatially arranged in a 2x2 pattern; that is, each color filter unit occupies two rows and two columns.
  • the sensor module 11 exposes the odd-numbered double-column and the even-numbered double-column respectively, that is, under-exposed for the odd-numbered double-column, and over-exposed for the even-numbered double-column to obtain an image frame of different exposure values of the parity double-column. This exposure method ensures that each column exposure must contain a complete set of color filtering units.
  • the decomposition module 12 decomposes the image frame to obtain an underexposed image frame and an overexposed image frame. Specifically, in the original image frame, the odd-numbered double columns of the underexposure remain unchanged, and the even double columns are changed to the missing columns, resulting in an underexposed image frame. In the original image frame, the even double columns of overexposure remain unchanged, and the odd double columns are changed to missing columns, resulting in overexposed image frames. Therefore, in the underexposed image frame, the underexposure double column and the missing double column are sequentially spaced, and the overexposed double column and the missing double column in the overexposed image frame are sequentially spaced.
  • the underexposure pixel recovery module 13 is configured to: calculate pixel estimation values of the missing double column pixel points by using pixel values of adjacent underexposure double column pixels; for example, using an average interpolation method. Considering that the information of the green channel in the underexposed image frame is more than that of the red and blue channels, the pixel recovery value of the missing pixel on the green channel is first obtained by interpolation.
  • the interpolation method includes at least one of bilinear interpolation and cubic interpolation. However, in other embodiments of the present invention, other interpolation methods may be applied to recover the information of the green channel.
  • the interpolation method here is the same as the foregoing, that is, it includes at least one of bilinear interpolation and cubic interpolation.
  • the difference recovery value of the missing pixel on the red/blue channel is added to the restored pixel value on the green channel to obtain the pixel recovery value on the red-blue channel to replace the missing double-column pixel in the under-exposed image frame.
  • the blue channel is processed in the same way as above, and will not be described here. Finally, the pixel recovery values of the missing columns on the red, green and blue channels in the underexposed image frame are obtained for subsequent processing of the merged frames.
  • the overexposure pixel recovery module 14 is configured to: calculate pixel estimation values of the missing double-column pixels by using the pixel values of adjacent overexposed double-column pixels, for example using an average interpolation method. Considering that the green channel carries more information than the red and blue channels, the pixel recovery value of the missing pixel on the green channel is first obtained by interpolation.
  • the interpolation method includes at least one of bilinear interpolation and cubic interpolation. Of course, in other embodiments of the present invention, other interpolation methods may be applied to recover the information of the green channel.
  • the interpolation method here is the same as the foregoing, that is, includes at least one of bilinear interpolation and cubic interpolation.
  • the recovery value of the difference of the missing pixel points on the red/blue channel is added to the restored pixel value on the green channel to obtain the pixel recovery value on the red-blue channel to replace the estimation of the missing double-column pixel in the over-exposed image frame.
  • the blue channel is processed in the same manner as described above, and will not be described here. Finally, the pixel recovery values of the missing columns on the red, green and blue channels in the overexposed image frame are obtained for subsequent processing of the merged frames.
  • the merging module 15 is configured to: respectively acquire brightness of each pixel in the underexposed image frame and the overexposed image frame according to pixel values of the pixel points on the red, green and blue channels.
  • the brightness is obtained by using the prior art, for example, the brightness can be (R+G+B) /3, and of course, other methods can be used to obtain the brightness according to the pixel value of the pixel on the red, green and blue channel.
  • the weight of each pixel is then obtained from the brightness of each pixel in the underexposed image frame and the overexposed image frame.
  • the adaptive underexposure threshold T1,new and the adaptive overexposure threshold T2,new are calculated from the preset underexposure threshold T1 and overexposure threshold T2 using the following relationships:
  • T1,new = max{ P2,x : x ∈ U(P1 < T1) }, T2,new = min{ P1,x : x ∈ U(P2 > T2) };
  • ω1 is the weight of the pixel whose luminance is P1 in the underexposed image frame,
  • ω2 is the weight of the pixel whose luminance is P2 in the overexposed image frame.
  • the picture taken according to the method of the present invention has a good contrast, no artifacts, and the transition band is natural, which is superior to the conventional method.
  • the present invention obtains an image frame in which the odd and even double columns have different exposure values by performing exposure with different integration times configured for the odd and even double columns; the image frame is decomposed into an underexposed image frame and an overexposed image frame, in which the underexposed double columns and the missing double columns alternate in the underexposed image frame and the overexposed double columns and the missing double columns alternate in the overexposed image frame;
  • for the underexposed image frame, the pixel recovery values of the missing double-column pixels are obtained on the red, green and blue channels from the pixel values of the underexposed double-column pixels and used as the pixel values of the corresponding pixels; for the overexposed image frame, the pixel recovery values of the missing double-column pixels are obtained on the red, green and blue channels from the pixel values of the overexposed double-column pixels and used as the pixel values of the corresponding pixels;
  • the overexposed image frame and the underexposed image frame are then merged according to the pixel values of the pixels on the red, green and blue channels in the two frames to obtain a high dynamic range frame, which overcomes the problem of high-speed motion blur and reduces the frame rate required for high-speed continuous shooting;
  • the further processing of the weights when merging the image frames also solves the problems of artifacts and unnatural transitions.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the modules or units is only a logical function division; in actual implementation there may be another division manner, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separate; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • the instructions include a plurality of instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor to perform all or part of the steps of the methods described in various embodiments of the present invention.
  • the foregoing storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Power Engineering (AREA)
  • Theoretical Computer Science (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Electromagnetism (AREA)
  • Condensed Matter Physics & Semiconductors (AREA)
  • Computer Hardware Design (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Image Processing (AREA)
  • Color Television Image Signal Generators (AREA)
  • Studio Devices (AREA)
  • Television Systems (AREA)
  • Processing Of Color Television Signals (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

A high dynamic range video recording method and apparatus based on a Bayer color filter array, comprising: performing exposure with different integration times configured for odd and even double columns to obtain a single image frame in which the odd and even double columns have different exposure values; decomposing the image frame into an underexposed image frame and an overexposed image frame, in which the underexposed double columns and the missing double columns alternate in the underexposed image frame and the overexposed double columns and the missing double columns alternate in the overexposed image frame; on the red, green and blue channels, obtaining the pixel recovery values of the missing double-column pixels in the underexposed and overexposed image frames from the pixel values of the underexposed or overexposed double-column pixels and using them as the pixel values of the corresponding pixels; and merging the overexposed image frame and the underexposed image frame according to the pixel values of the pixels on the red, green and blue channels to obtain a high dynamic range frame. In this way, the problem of high-speed motion blur can be overcome and the frame rate required for high-speed continuous shooting can be reduced.

Description

High dynamic range video recording method and apparatus based on a Bayer color filter array
[Technical Field]
The present invention relates to the field of high dynamic range (hereinafter HDR) video recording technology, and in particular to a high dynamic range video recording method and apparatus based on a Bayer color filter array.
[Background Art]
In digital cameras, when shooting in high-contrast lighting, an ordinary camera is limited by its dynamic range and cannot record extremely bright or dark detail, whereas HDR video recording obtains better tonal gradation in both highlight and shadow regions than normal shooting. The dynamic range of a real scene is often above 100 dB. The sensor is the core imaging component of a digital imaging device. The sensor elements used in conventional digital cameras are CCD (Charge-Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) sensors, which generally have a dynamic range of only about 60 dB. If a sensor with a narrow dynamic range is used to record a scene with a wide dynamic range, multiple exposures are required. Taking a 100 dB scene as an example, one can first raise the shutter speed to take an underexposed photograph covering 0 to 60 dB, then lower the shutter speed to take an overexposed photograph covering 40 to 100 dB, and finally fuse the two photographs into one and recompute the grayscale mapping relationship.
Existing manufacturers shoot HDR video with high-frame-rate sensors, capturing several images with different exposure values in rapid succession and combining them into one photograph in an HDR manner. However, when shooting a moving subject, ghosting may appear in the photograph. After multiple frames have been acquired, a special HDR algorithm is needed to merge them into a single frame.
Modern CMOS sensors typically use a color filter array structure, and an image captured through a Bayer filter array is called a Bayer image. Each pixel records 10 to 14 bits of monochrome information, and the RGB primary color information must be computed by interpolating that pixel with its surrounding pixels.
Existing methods of shooting HDR video involve two key technical points: multi-exposure frame acquisition and the HDR frame merging algorithm. Multi-exposure frame acquisition obtains multiple frames by high-speed continuous shooting at different exposure values. It has two drawbacks. First, if fast-moving objects are present in the scene, point-to-point matching between two frames is impossible, and the merged picture is prone to motion blur. Second, high-speed continuous shooting requires an extremely high frame rate, which limits the minimum shutter speed for video capture.
The HDR algorithm first estimates the camera's luminance response function from the multiple exposure frames, then computes a new grayscale table by grayscale mapping, and finally computes the new HDR image. Because the camera's luminance response function usually requires parameter estimation for all gray levels, the computational complexity is acceptable for 8-bit images (256 gray levels), but for Bayer images (14 bits) the computation is far too large, so this approach cannot be applied directly to HDR video recording. Weighted averaging is another common frame-merging method. Taking the merging of two frames as an example, the merged pixel value p_new can be calculated with equation (1):
p_new = ω1·p1 + (1 − ω1)·p2      (1)
where p1 and p2 are the pixel values at a given position in the underexposed and overexposed images respectively, and ω1 is a number between 0 and 1 that represents the weight of pixel 1 in the merged pixel. Conventional methods mainly consider over- and under-exposure of the pixels when assigning weights, and usually set thresholds to detect abnormal exposure. The weight of an over- or under-exposed pixel is much lower than that of a normally exposed pixel. Taking an 8-bit image as an example, the weight is calculated with equation (2),
where T1 and T2 are the underexposure and overexposure thresholds, respectively. This simple distinction between overexposure and underexposure adapts poorly to different scenes: artifacts readily appear, and unnatural transitions occur when pixels are merged.
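To make the conventional approach described above concrete, the following is a minimal sketch (not the method of the invention) of a threshold-based weighted merge of an underexposed and an overexposed 8-bit frame. The piecewise weight values are an assumed illustrative form, since the exact weight function of equation (2) is not reproduced in the text above.

```python
import numpy as np

def conventional_merge(p1, p2, t1=30, t2=220):
    """Conventional weighted merge of two 8-bit exposures (illustrative only).

    p1: underexposed image, p2: overexposed image, both uint8 arrays.
    t1, t2: underexposure / overexposure thresholds (as in equation (2)).
    The piecewise weight w1 is an assumed form: pixels of p1 that are under-
    or over-exposed get a low weight, as the background section states.
    """
    p1 = p1.astype(np.float64)
    p2 = p2.astype(np.float64)
    w1 = np.full_like(p1, 0.5)          # normally exposed: equal weighting
    w1[p1 < t1] = 0.1                   # underexposed in p1: rely mostly on p2
    w1[p1 > t2] = 0.1                   # overexposed in p1: rely mostly on p2
    p_new = w1 * p1 + (1.0 - w1) * p2   # equation (1)
    return np.clip(p_new, 0, 255).astype(np.uint8)
```

Because the thresholds are fixed and the weights switch abruptly, this kind of merge is exactly what produces the artifacts and unnatural transitions criticized above.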
[Summary of the Invention]
The technical problem solved by the present invention is to provide a high dynamic range video recording method and apparatus based on a Bayer color filter array, which can overcome the problem of high-speed motion blur and reduce the frame rate required for high-speed continuous shooting.
To solve the above technical problem, an embodiment of the present invention provides a high dynamic range video recording method based on a Bayer color filter array, comprising:
performing exposure with different integration times configured for the odd and even double columns to obtain a single image frame in which the odd and even double columns have different exposure values, where the odd double columns are the columns of the image frame whose column number is exactly divisible by 4 or leaves a remainder of 1 when divided by 4, and the even double columns are the columns whose column number leaves a remainder of 2 or 3 when divided by 4;
decomposing the image frame into an underexposed image frame and an overexposed image frame, wherein in the underexposed image frame the underexposed double columns and the missing double columns alternate, and in the overexposed image frame the overexposed double columns and the missing double columns alternate; for the underexposed image frame, obtaining, on the red, green and blue channels respectively, the pixel recovery values of the missing double-column pixels in the underexposed image frame from the pixel values of the underexposed double-column pixels and using them as the pixel values of the corresponding pixels; for the overexposed image frame, obtaining, on the red, green and blue channels respectively, the pixel recovery values of the missing double-column pixels in the overexposed image frame from the pixel values of the overexposed double-column pixels and using them as the pixel values of the corresponding pixels; and merging the overexposed image frame and the underexposed image frame according to the pixel values of the pixels on the red, green and blue channels in the underexposed image frame and the overexposed image frame to obtain a high dynamic range frame.
Wherein, the step of obtaining, on the red, green and blue channels respectively, the pixel recovery values of the missing double-column pixels in the underexposed image frame from the pixel values of the underexposed double-column pixels and using them as the pixel values of the corresponding pixels comprises: calculating estimated values of the missing double-column pixels from the pixel values of adjacent underexposed double-column pixels; obtaining the pixel recovery values of the missing pixels on the green channel by interpolation;
calculating the differences between the pixel values of the pixels on the red and blue channels and the pixel recovery values on the green channel; interpolating these differences to obtain the recovery values of the differences at the missing pixel positions on the red/blue channels; and
adding the recovery values of the differences at the missing pixel positions on the red/blue channels to the recovered pixel values on the green channel to obtain the pixel recovery values on the red and blue channels, which replace the estimated values of the missing double-column pixels in the underexposed image frame and serve as the pixel values of the corresponding pixels.
Wherein, the step of obtaining, on the red, green and blue channels respectively, the pixel recovery values of the missing double-column pixels in the overexposed image frame from the pixel values of the overexposed double-column pixels and using them as the pixel values of the corresponding pixels comprises: calculating estimated values of the missing double-column pixels from the pixel values of adjacent overexposed double-column pixels; obtaining the pixel recovery values of the missing pixels on the green channel by interpolation;
calculating the differences between the pixel values of the pixels on the red and blue channels and the pixel recovery values on the green channel; interpolating these differences to obtain the recovery values of the differences at the missing pixel positions on the red/blue channels; and
adding the recovery values of the differences at the missing pixel positions on the red/blue channels to the recovered pixel values on the green channel to obtain the pixel recovery values on the red and blue channels, which replace the estimated values of the missing double-column pixels in the overexposed image frame and serve as the pixel values of the corresponding pixels.
Wherein, the interpolation method comprises at least one of bilinear interpolation and cubic interpolation.
Wherein, the step of merging the overexposed image frame and the underexposed image frame according to the pixel values of the pixels on the red, green and blue channels in the underexposed image frame and the overexposed image frame to obtain a high dynamic range frame comprises:
obtaining the luminance of each pixel in the underexposed image frame and the overexposed image frame from the pixel values of the pixels on the red, green and blue channels; obtaining the weight of each pixel from the luminance of each pixel in the underexposed image frame and the overexposed image frame; and
merging the overexposed image frame and the underexposed image frame according to the weight of each pixel to obtain a high dynamic range frame.
Wherein, the step of obtaining the weight of each pixel from the luminance of each pixel in the underexposed image frame and the overexposed image frame comprises:
calculating the adaptive underexposure threshold T1,new and the adaptive overexposure threshold T2,new from the preset underexposure threshold T1 and overexposure threshold T2 using the following relationships:
T1,new = max{ P2,x : x ∈ U(P1 < T1) }
T2,new = min{ P1,x : x ∈ U(P2 > T2) }
where P1 and P2 are the luminances of the pixels in the underexposed image frame and the overexposed image frame respectively, U(P1 < T1) denotes the set of positions in P1 whose pixels are underexposed (luminance less than T1), and U(P2 > T2) denotes the set of positions in P2 whose pixels are overexposed (luminance greater than T2); and
calculating the weight of each pixel from the adaptive underexposure threshold T1,new and the adaptive overexposure threshold T2,new, where ω1 is the weight of the pixel whose luminance is P1 in the underexposed image frame and ω2 is the weight of the pixel whose luminance is P2 in the overexposed image frame.
Wherein, the step of merging the overexposed image frame and the underexposed image frame according to the weight of each pixel to obtain a high dynamic range frame comprises:
convolving the weight of each pixel with a two-dimensional Gaussian filter; and
performing the frame-merging calculation and contrast stretching using the following relationship:
q_new,i = (1 − ω1)·a1·q1,i + (1 − ω2)·a2·q2,i + ω1·q2,i + ω2·q1,i,  i = 1, 2, 3
where a1 and a2, each of the form 1 − (·)/127, are used to enhance contrast, and q1,i and q2,i are the three color channels of the two RGB images.
To solve the above technical problem, an embodiment of the present invention further provides a high dynamic range video recording apparatus based on a Bayer color filter array, wherein the apparatus comprises:
a sensor module, configured to perform exposure with different integration times configurable for the odd and even double columns, to obtain an image frame in which the odd and even double columns have different exposure values, where the odd double columns are the columns of the image frame whose column number is exactly divisible by 4 or leaves a remainder of 1 when divided by 4, and the even double columns are the columns whose column number leaves a remainder of 2 or 3 when divided by 4;
a decomposition module, connected to the sensor module and configured to decompose the image frame into an underexposed image frame and an overexposed image frame, wherein in the underexposed image frame the underexposed double columns and the missing double columns alternate, and in the overexposed image frame the overexposed double columns and the missing double columns alternate;
an underexposure pixel recovery module, connected to the decomposition module and configured, for the underexposed image frame, to obtain on the red, green and blue channels respectively the pixel recovery values of the missing double-column pixels in the underexposed image frame from the pixel values of the underexposed double-column pixels and to use them as the pixel values of the corresponding pixels;
an overexposure pixel recovery module, connected to the decomposition module and configured, for the overexposed image frame, to obtain on the red, green and blue channels respectively the pixel recovery values of the missing double-column pixels in the overexposed image frame from the pixel values of the overexposed double-column pixels and to use them as the pixel values of the corresponding pixels; and
a merging module, connected to the underexposure pixel recovery module and the overexposure pixel recovery module and configured to merge the overexposed image frame and the underexposed image frame according to the pixel values of the pixels on the red, green and blue channels in the underexposed image frame and the overexposed image frame to obtain a high dynamic range frame.
Wherein, the underexposure pixel recovery module is configured to:
calculate estimated values of the missing double-column pixels from the pixel values of adjacent underexposed double-column pixels; obtain the pixel recovery values of the missing pixels on the green channel by interpolation;
calculate the differences between the pixel values of the pixels on the red and blue channels and the pixel recovery values on the green channel; interpolate these differences to obtain the recovery values of the differences at the missing pixel positions on the red/blue channels; and
add the recovery values of the differences at the missing pixel positions on the red/blue channels to the recovered pixel values on the green channel to obtain the pixel recovery values on the red and blue channels, which replace the estimated values of the missing double-column pixels in the underexposed image frame and serve as the pixel values of the corresponding pixels.
Wherein, the overexposure pixel recovery module is configured to:
calculate estimated values of the missing double-column pixels from the pixel values of adjacent overexposed double-column pixels; obtain the pixel recovery values of the missing pixels on the green channel by interpolation;
calculate the differences between the pixel values of the pixels on the red and blue channels and the pixel recovery values on the green channel; interpolate these differences to obtain the recovery values of the differences at the missing pixel positions on the red/blue channels; and add the recovery values of the differences at the missing pixel positions on the red/blue channels to the recovered pixel values on the green channel to obtain the pixel recovery values on the red and blue channels, which replace the estimated values of the missing double-column pixels in the overexposed image frame and serve as the pixel values of the corresponding pixels.
Wherein, the interpolation comprises at least one of bilinear interpolation and cubic interpolation.
Wherein, the merging module is configured to:
obtain the luminance of each pixel in the underexposed image frame and the overexposed image frame from the pixel values of the pixels on the red, green and blue channels;
obtain the weight of each pixel from the luminance of each pixel in the underexposed image frame and the overexposed image frame; and
merge the overexposed image frame and the underexposed image frame according to the weight of each pixel to obtain a high dynamic range frame.
Wherein, the merging module is further configured to:
calculate the adaptive underexposure threshold T1,new and the adaptive overexposure threshold T2,new from the preset underexposure threshold T1 and overexposure threshold T2 using the following relationships:
T1,new = max{ P2,x : x ∈ U(P1 < T1) }
T2,new = min{ P1,x : x ∈ U(P2 > T2) }
where P1 and P2 are the luminances of the pixels in the underexposed image frame and the overexposed image frame respectively, U(P1 < T1) denotes the set of positions in P1 whose pixels are underexposed (luminance less than T1), and U(P2 > T2) denotes the set of positions in P2 whose pixels are overexposed (luminance greater than T2); and
calculate the weight of each pixel from the adaptive underexposure threshold T1,new and the adaptive overexposure threshold T2,new, where ω1 is the weight of the pixel whose luminance is P1 in the underexposed image frame and ω2 is the weight of the pixel whose luminance is P2 in the overexposed image frame.
Wherein, the merging module is further configured to:
convolve the weight of each pixel with a two-dimensional Gaussian filter; and perform the frame-merging calculation and contrast stretching using the following relationship:
q_new,i = (1 − ω1)·a1·q1,i + (1 − ω2)·a2·q2,i + ω1·q2,i + ω2·q1,i,  i = 1, 2, 3
where a1 and a2, each of the form 1 − (·)/127, are used to enhance contrast, and q1,i and q2,i are the three color channels of the two RGB images.
To solve the above technical problem, an embodiment of the present invention provides a high dynamic range video recording method based on a Bayer color filter array, comprising:
performing exposure with different integration times configured for the odd and even double columns to obtain a single image frame in which the odd and even double columns have different exposure values, where the odd double columns are the columns of the image frame whose column number is exactly divisible by 4 or leaves a remainder of 1 when divided by 4, and the even double columns are the columns whose column number leaves a remainder of 2 or 3 when divided by 4;
decomposing the image frame into an underexposed image frame and an overexposed image frame, wherein in the underexposed image frame the underexposed double columns and the missing double columns alternate, and in the overexposed image frame the overexposed double columns and the missing double columns alternate;
for the underexposed image frame, obtaining, on the red, green and blue channels respectively, the pixel recovery values of the missing double-column pixels in the underexposed image frame from the pixel values of the underexposed double-column pixels and using them as the pixel values of the corresponding pixels; for the overexposed image frame, obtaining, on the red, green and blue channels respectively, the pixel recovery values of the missing double-column pixels in the overexposed image frame from the pixel values of the overexposed double-column pixels and using them as the pixel values of the corresponding pixels; and merging the overexposed image frame and the underexposed image frame according to the pixel values of the pixels on the red, green and blue channels in the underexposed image frame and the overexposed image frame to obtain a high dynamic range frame;
wherein the step of merging the overexposed image frame and the underexposed image frame according to the pixel values of the pixels on the red, green and blue channels in the underexposed image frame and the overexposed image frame to obtain a high dynamic range frame comprises:
obtaining the luminance of each pixel in the underexposed image frame and the overexposed image frame from the pixel values of the pixels on the red, green and blue channels;
obtaining the weight of each pixel from the luminance of each pixel in the underexposed image frame and the overexposed image frame; and
merging the overexposed image frame and the underexposed image frame according to the weight of each pixel to obtain a high dynamic range frame.
Wherein, the step of obtaining, on the red, green and blue channels respectively, the pixel recovery values of the missing double-column pixels in the underexposed image frame from the pixel values of the underexposed double-column pixels and using them as the pixel values of the corresponding pixels comprises: calculating estimated values of the missing double-column pixels from the pixel values of adjacent underexposed double-column pixels; obtaining the pixel recovery values of the missing pixels on the green channel by interpolation; calculating the differences between the pixel values of the pixels on the red and blue channels and the pixel recovery values on the green channel; interpolating these differences to obtain the recovery values of the differences at the missing pixel positions on the red/blue channels; and
adding the recovery values of the differences at the missing pixel positions on the red/blue channels to the recovered pixel values on the green channel to obtain the pixel recovery values on the red and blue channels, which replace the estimated values of the missing double-column pixels in the underexposed image frame and serve as the pixel values of the corresponding pixels.
Wherein, the step of obtaining, on the red, green and blue channels respectively, the pixel recovery values of the missing double-column pixels in the overexposed image frame from the pixel values of the overexposed double-column pixels and using them as the pixel values of the corresponding pixels comprises: calculating estimated values of the missing double-column pixels from the pixel values of adjacent overexposed double-column pixels; obtaining the pixel recovery values of the missing pixels on the green channel by interpolation;
calculating the differences between the pixel values of the pixels on the red and blue channels and the pixel recovery values on the green channel; interpolating these differences to obtain the recovery values of the differences at the missing pixel positions on the red/blue channels; and
adding the recovery values of the differences at the missing pixel positions on the red/blue channels to the recovered pixel values on the green channel to obtain the pixel recovery values on the red and blue channels, which replace the estimated values of the missing double-column pixels in the overexposed image frame and serve as the pixel values of the corresponding pixels.
Wherein, the interpolation method comprises at least one of bilinear interpolation and cubic interpolation.
Wherein, the step of obtaining the weight of each pixel from the luminance of each pixel in the underexposed image frame and the overexposed image frame comprises:
calculating the adaptive underexposure threshold T1,new and the adaptive overexposure threshold T2,new from the preset underexposure threshold T1 and overexposure threshold T2 using the following relationships:
T1,new = max{ P2,x : x ∈ U(P1 < T1) }
T2,new = min{ P1,x : x ∈ U(P2 > T2) }
where P1 and P2 are the luminances of the pixels in the underexposed image frame and the overexposed image frame respectively, U(P1 < T1) denotes the set of positions in P1 whose pixels are underexposed (luminance less than T1), and U(P2 > T2) denotes the set of positions in P2 whose pixels are overexposed (luminance greater than T2); and
calculating the weight of each pixel from the adaptive underexposure threshold T1,new and the adaptive overexposure threshold T2,new, where ω1 is the weight of the pixel whose luminance is P1 in the underexposed image frame and ω2 is the weight of the pixel whose luminance is P2 in the overexposed image frame.
Wherein, the step of merging the overexposed image frame and the underexposed image frame according to the weight of each pixel to obtain a high dynamic range frame comprises:
convolving the weight of each pixel with a two-dimensional Gaussian filter; and
performing the frame-merging calculation and contrast stretching using the following relationship:
q_new,i = (1 − ω1)·a1·q1,i + (1 − ω2)·a2·q2,i + ω1·q2,i + ω2·q1,i,  i = 1, 2, 3
where a1 and a2, each of the form 1 − (·)/127, are used to enhance contrast, and q1,i and q2,i are the three color channels of the two RGB images.
Through the above solution, compared with the prior art, the beneficial effects of the present invention are: an image frame in which the odd and even double columns have different exposure values is obtained by performing exposure with different integration times configured for the odd and even double columns; the image frame is decomposed into an underexposed image frame and an overexposed image frame, in which the underexposed double columns and the missing double columns alternate in the underexposed image frame and the overexposed double columns and the missing double columns alternate in the overexposed image frame; for the underexposed image frame, the pixel recovery values of the missing double-column pixels are obtained on the red, green and blue channels from the pixel values of the underexposed double-column pixels and used as the pixel values of the corresponding pixels; for the overexposed image frame, the pixel recovery values of the missing double-column pixels are obtained on the red, green and blue channels from the pixel values of the overexposed double-column pixels and used as the pixel values of the corresponding pixels; and the overexposed image frame and the underexposed image frame are then merged according to the pixel values of the pixels on the red, green and blue channels in the two frames to obtain a high dynamic range frame. This overcomes the problem of high-speed motion blur and reduces the frame rate required for high-speed continuous shooting.
[Brief Description of the Drawings]
FIG. 1 is a schematic flowchart of a high dynamic range video recording method based on a Bayer color filter array according to a first embodiment of the present invention;
FIG. 2 is a Bayer diagram of the exposure in the first embodiment of the present invention;
FIG. 3 is a schematic flowchart of a method for implementing step S12 in the first embodiment of the present invention;
FIG. 4 is a schematic diagram of a method for obtaining pixel estimation values of missing double-column pixels in step S12 of the first embodiment of the present invention;
FIG. 5 is a schematic diagram of a method for obtaining pixel recovery values of pixels on the red channel in step S12 of the first embodiment of the present invention;
FIG. 6 is a schematic flowchart of a method for implementing step S14 in the first embodiment of the present invention;
FIG. 7 is a schematic diagram of the results of the high dynamic range video recording method based on a Bayer color filter array of the present invention;
FIG. 8 is a schematic structural diagram of a high dynamic range video recording apparatus based on a Bayer color filter array according to a first embodiment of the present invention.
[Detailed Description of Embodiments]
Referring to FIG. 1, FIG. 1 is a schematic flowchart of a high dynamic range video recording method based on a Bayer color filter array according to a first embodiment of the present invention. As shown in FIG. 1, the method based on a Bayer color filter array comprises:
Step S10: performing exposure with different integration times configured for the odd and even double columns, obtaining a single image frame in which the odd and even double columns have different exposure values, where the odd double columns are the columns of the image frame whose column number is exactly divisible by 4 or leaves a remainder of 1 when divided by 4, and the even double columns are the columns whose column number leaves a remainder of 2 or 3 when divided by 4.
In the Bayer pattern used in the present invention, one color filter unit contains one R unit, one B unit and two G units, spatially arranged in a 2x2 pattern; that is, each color filter unit occupies two rows and two columns. Specifically, let the column index be c, with 0 <= c <= C, where C is the total number of columns of the image frame; C is generally even and preferably a multiple of 4. An odd double column is defined as a pair (c1, c2), where c1 is exactly divisible by 4, c2 leaves a remainder of 1 when divided by 4, and c2 = c1 + 1; an even double column is defined as a pair (c3, c4), where c3 leaves a remainder of 2 when divided by 4, c4 leaves a remainder of 3 when divided by 4, and c4 = c3 + 1. This configuration guarantees that both the odd double columns and the even double columns each contain a complete set of the Bayer color pattern.
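As an illustration of the column grouping just defined, the following sketch (a hypothetical helper, not part of the patent) derives the boolean column masks for the odd and even double columns from the total number of columns.

```python
import numpy as np

def double_column_masks(num_cols):
    """Return boolean column masks for odd and even double columns.

    Columns c with c % 4 in {0, 1} form the odd double columns (c1, c2);
    columns with c % 4 in {2, 3} form the even double columns (c3, c4),
    following the definition above (num_cols is preferably a multiple of 4).
    """
    c = np.arange(num_cols)
    odd_mask = (c % 4) < 2      # c % 4 == 0 or 1 -> odd double column
    even_mask = ~odd_mask       # c % 4 == 2 or 3 -> even double column
    return odd_mask, even_mask
```

For example, double_column_masks(8) returns odd = [T, T, F, F, T, T, F, F] and even = [F, F, T, T, F, F, T, T], so each double column spans exactly one pair of Bayer columns.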
In step S10, as shown in FIG. 2, exposure is performed in units of double columns, exposing the odd double columns and the even double columns separately; that is, the odd double columns are underexposed and the even double columns are overexposed, yielding a single image frame in which the odd and even double columns have different exposure values. For example, by designing the sensor to configure different integration times for the odd and even double columns, image frames with two different exposure values are obtained, each having only half the width of the original frame. Thus, where video shot with the conventional method requires a frame rate of 60 frames per second, the present invention needs only 30 frames per second to achieve the same effect. Of course, in other embodiments of the present invention, the even double columns may be underexposed and the odd double columns overexposed. This single-frame multi-exposure approach guarantees that every exposed double column contains a complete set of color filter units.
Step S11: decomposing the image frame into an underexposed image frame and an overexposed image frame, wherein in the underexposed image frame the underexposed double columns and the missing double columns alternate, and in the overexposed image frame the overexposed double columns and the missing double columns alternate.
In step S11, the image frame obtained in step S10 is decomposed to obtain an underexposed image frame and an overexposed image frame. Specifically, in the original image frame, the underexposed odd double columns are kept unchanged and the even double columns are changed to missing columns, yielding the underexposed image frame; the overexposed even double columns are kept unchanged and the odd double columns are changed to missing columns, yielding the overexposed image frame. Therefore, the underexposed double columns and the missing double columns alternate in the underexposed image frame, and the overexposed double columns and the missing double columns alternate in the overexposed image frame.
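A minimal sketch of the decomposition in step S11 follows, under the assumption that the raw dual-exposure frame is a single 2-D Bayer array and that missing columns are marked with NaN; the function and parameter names are illustrative, not taken from the patent.

```python
import numpy as np

def decompose_frame(bayer, odd_mask, even_mask):
    """Split one dual-exposure Bayer frame into under- and over-exposed frames.

    Odd double columns (underexposed) are kept in `under`; even double columns
    (overexposed) are kept in `over`. The columns belonging to the other
    exposure are marked as missing (NaN) so they can be recovered later.
    """
    under = bayer.astype(np.float64).copy()
    over = bayer.astype(np.float64).copy()
    under[:, even_mask] = np.nan   # even double columns are missing here
    over[:, odd_mask] = np.nan     # odd double columns are missing here
    return under, over
```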
Step S12: for the underexposed image frame, obtaining, on the red, green and blue channels respectively, the pixel recovery values of the missing double-column pixels in the underexposed image frame from the pixel values of the underexposed double-column pixels, and using them as the pixel values of the corresponding pixels.
For the underexposed image frame, as shown in FIG. 3, the recovery of the RGB information comprises:
Step S120: calculating estimated values of the missing double-column pixels from the pixel values of adjacent underexposed double-column pixels.
In the underexposed image frame, the even double columns are the missing columns. In step S120, the missing values of the even double columns are calculated by interpolating the adjacent odd double columns. As shown in FIG. 4, average interpolation may be used, for example: R3 = (R1 + R5)/2, G4 = (G2 + G6)/2, G9 = (G7 + G11)/2, B10 = (B8 + B12)/2.
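The following sketch illustrates this column-averaging estimate for interior missing columns, under the assumption that column c and columns c − 2, c + 2 carry the same Bayer color (which the double-column layout guarantees); boundary columns simply copy their single available neighbor. The helper name is an assumption for illustration.

```python
import numpy as np

def estimate_missing_columns(frame, missing_mask):
    """Fill missing double columns with the average of the nearest same-color
    columns two positions to the left and right (e.g. R3 = (R1 + R5) / 2).

    `frame` uses NaN for missing samples; `missing_mask` is a 1-D boolean
    array over the columns."""
    est = frame.copy()
    num_cols = frame.shape[1]
    for c in np.where(missing_mask)[0]:
        left, right = c - 2, c + 2
        if 0 <= left and right < num_cols:
            est[:, c] = 0.5 * (frame[:, left] + frame[:, right])
        elif 0 <= left:
            est[:, c] = frame[:, left]    # right boundary: one-sided copy
        elif right < num_cols:
            est[:, c] = frame[:, right]   # left boundary: one-sided copy
    return est
```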
Step S121: obtaining the pixel recovery values of the missing pixels on the green channel by interpolation.
Considering that the underexposed image frame carries more green-channel information than red- or blue-channel information, the green-channel information is recovered first by interpolation. The interpolation method comprises at least one of bilinear interpolation and cubic interpolation; of course, in other embodiments of the present invention, other interpolation methods may also be used to recover the green-channel information.
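A sketch of such an interpolation is given below; it recovers missing samples of one color plane by linear interpolation along each row (the 1-D reduction of the bilinear case), with the understanding that a cubic interpolator could be substituted. The array layout and NaN convention are assumptions for illustration.

```python
import numpy as np

def interpolate_missing_in_rows(channel, known_mask):
    """Linearly interpolate missing samples of one color plane along each row.

    `channel`: 2-D array with NaN at unknown positions.
    `known_mask`: boolean array marking positions carrying real samples.
    """
    out = channel.copy()
    cols = np.arange(channel.shape[1])
    for r in range(channel.shape[0]):
        known = known_mask[r]
        if known.sum() >= 2:  # need at least two samples to interpolate a row
            out[r, ~known] = np.interp(cols[~known], cols[known],
                                       channel[r, known])
    return out
```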
Step S122: the differences between the pixel values of the pixels on the red and blue channels and the recovered pixel values on the green channel are calculated respectively.
Since the red and blue channels contain relatively few pixels in the under-exposed image frame, the red and blue pixels are not recovered directly; instead, the differences between the known red/blue pixels and the green channel are used to recover the red and blue channels. Specifically, as shown in Fig. 5, taking the red channel as an example, the difference between a known red pixel and the green channel is denoted R1, written R1 = R − G, where R is the red pixel value at the imaging position and G is the recovered pixel value of the green channel at the corresponding position.
Step S123: the differences between the pixel values of the pixels on the red and blue channels and the recovered pixel values on the green channel are interpolated to obtain recovered difference values of the missing pixels on the red/blue channels. That is, the differences R1 between the known red pixels and the green channel are interpolated to obtain the recovered difference values of the missing pixels on the red channel. Step S124: the recovered difference values of the missing pixels on the red/blue channels are added to the recovered pixel values on the green channel to obtain recovered pixel values on the red and blue channels, which replace the estimated values of the missing double-column pixels in the under-exposed image frame and serve as the pixel values of the corresponding pixels.
After the recovered difference value R1 of a missing pixel on the red channel is obtained, the estimated red pixel value R is the sum of the recovered difference value R1 on the red channel and the recovered green pixel value G, i.e. R = R1 + G. In the recovered information of Fig. 5, a missing column lies on the right side; when the missing column lies on the left side, the recovered pixel values of the missing column are easily obtained with a similar calculation. These recovered pixel values of the missing columns then replace the missing values of the even double columns that were estimated by interpolation from the adjacent odd double columns. The blue channel is processed with the same method shown in Fig. 5, which is not repeated here. The recovered pixel values of the missing columns on the red, green and blue channels of the under-exposed image frame are finally obtained for the subsequent frame-merging processing.
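A sketch of this color-difference recovery for the red channel, reusing the assumed griddata-based interpolation and mask convention of the previous sketch; the blue channel would be handled identically.

```python
import numpy as np
from scipy.interpolate import griddata

def recover_red(frame, red_mask, green_full):
    """Recover the red channel through the difference R1 = R - G: interpolate
    the difference over the frame, then add the recovered green back."""
    rows, cols = frame.shape
    yy, xx = np.mgrid[0:rows, 0:cols]
    known = red_mask & ~np.isnan(frame)
    diff_known = frame[known] - green_full[known]      # R1 = R - G at known red pixels
    diff_full = griddata(
        np.column_stack([yy[known], xx[known]]),
        diff_known,
        (yy, xx),
        method="linear",
    )
    return diff_full + green_full                      # R = R1 + G at every position
```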
Referring again to Fig. 1, step S13: for the over-exposed image frame, recovered pixel values of the missing double-column pixels in the over-exposed image frame are obtained on the red, green and blue channels respectively from the pixel values of the over-exposed double-column pixels and used as the pixel values of the corresponding pixels.
Recovered pixel values of the missing pixels on the green channel are obtained by interpolation; the differences between the pixel values of the pixels on the red and blue channels and the recovered pixel values on the green channel are calculated respectively; these differences are interpolated to obtain recovered difference values of the missing pixels on the red/blue channels; the recovered difference values of the missing pixels on the red/blue channels are added to the recovered pixel values on the green channel to obtain recovered pixel values on the red and blue channels, which replace the estimated values of the missing double-column pixels in the over-exposed image frame and serve as the pixel values of the corresponding pixels. The specific procedure is the same as the method of Fig. 5 for obtaining the recovered pixel values of the missing double-column pixels, and is not repeated here. The recovered pixel values of the missing columns on the red, green and blue channels of the over-exposed image frame are finally obtained for the subsequent frame-merging processing.
Step S14: the over-exposed image frame and the under-exposed image frame are merged according to the pixel values of the pixels on the red, green and blue channels of the under-exposed image frame and the over-exposed image frame, so as to obtain one high dynamic range frame.
The frame-merging calculations used in conventional HDR video suffer from artifacts and unnatural transitions. The artifacts originate from luminance inversion in the transition band during frame merging; conventional methods cannot avoid this when selecting the highlight and lowlight thresholds, and the problem becomes visible in regions with large illumination changes. The unnatural transitions arise because isolated points and isolated blocks are not taken into account. Since the lighting of natural scenes is complex, over-exposed and under-exposed regions are easily interleaved, and simple merging readily produces unnatural transitions. As shown in Fig. 6, merging the over-exposed image frame and the under-exposed image frame to obtain one high dynamic range frame comprises: step S140: the luminance of each pixel in the under-exposed image frame and the over-exposed image frame is obtained respectively from the pixel values of the pixels on the red, green and blue channels.
In step S140, the luminance is obtained with existing techniques; for example, the luminance may be (R + G + B)/3, and of course other methods may also be used to obtain the luminance from the pixel values of the pixels on the red, green and blue channels.
Step S141: a weight for each pixel is obtained from the luminance of each pixel in the under-exposed image frame and the over-exposed image frame.
In step S141, an adaptive under-exposure threshold T1,new and an adaptive over-exposure threshold T2,new are calculated from a preset under-exposure threshold T1 and a preset over-exposure threshold T2 using the following relations:
T1,new = max_{x∈U(P1<T1)} P2,x
T2,new = min_{x∈U(P2>T2)} P1,x
where P1 and P2 are the luminances of the pixels in the under-exposed image frame and the over-exposed image frame respectively, U(P1 < T1) denotes the set of under-exposed pixels of P1 that are smaller than T1, and U(P2 > T2) denotes the set of over-exposed pixels of P2 that are larger than T2. Thus T1,new is the upper bound of the pixel values of P2 at the under-exposed positions of P1, and T2,new is the lower bound of the pixel values of P1 at the over-exposed positions of P2.
The weight of each pixel is then calculated from the adaptive under-exposure threshold T1,new and the adaptive over-exposure threshold T2,new using the following relations:
ω1 = (T1,new − P1) / T1,new if P1 < T1,new, and ω1 = 0 otherwise;
ω2 = (P2 − T2,new) / (255 − T2,new) if P2 > T2,new, and ω2 = 0 otherwise;
where ω1 is the weight of a pixel with luminance P1 in the under-exposed image frame and ω2 is the weight of a pixel with luminance P2 in the over-exposed image frame.
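The sketch below computes the adaptive thresholds and the per-pixel weights following the relations above. Note that the branch for ω1 is reconstructed from the surrounding definitions (the original formula image is not available), so its exact normalization is an assumption; the code also assumes both pixel sets are non-empty.

```python
import numpy as np

def pixel_weights(p1, p2, t1, t2):
    """p1, p2: luminance maps of the under-/over-exposed frames (0..255);
    t1, t2: preset under-/over-exposure thresholds."""
    t1_new = p2[p1 < t1].max()   # upper bound of p2 over the under-exposed positions of p1
    t2_new = p1[p2 > t2].min()   # lower bound of p1 over the over-exposed positions of p2
    w1 = np.where(p1 < t1_new, (t1_new - p1) / t1_new, 0.0)
    w2 = np.where(p2 > t2_new, (p2 - t2_new) / (255.0 - t2_new), 0.0)
    return w1, w2, t1_new, t2_new
```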
Step S142: the over-exposed image frame and the under-exposed image frame are merged according to the weight of each pixel to obtain one high dynamic range frame.
To avoid the unnatural transitions caused by interleaved over-exposed and under-exposed regions, the weight maps ω1 and ω2 are Gaussian-blurred. In step S142, the weight of each pixel is first convolved with a two-dimensional Gaussian filter. Specifically, a two-dimensional Gaussian filter with window width H and standard deviation σ is convolved with ω1 and ω2. H is related to the size of the image frame, and σ = H/6 is generally chosen; of course, other values of σ may also be chosen as needed. The frame merge is then performed with the following relation, together with contrast stretching:
qnew,i = (1 − ω1)·a1·q1,i + (1 − ω2)·a2·q2,i + ω1·q2,i + ω2·q1,i, i = 1, 2, 3
where a1 = 1 − |P1 − 127|/127 and a2 = 1 − |P2 − 127|/127 are used to enhance contrast, and q1,i and q2,i are the three color channels of the RGB images.
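A sketch of this blur-and-merge step using SciPy's gaussian_filter for the two-dimensional Gaussian convolution; the function layout, the channel-last RGB arrays and the caller-supplied sigma (typically H/6) are illustrative assumptions, and the final clipping is an added safeguard rather than part of the description.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def merge_hdr(q1, q2, p1, p2, w1, w2, sigma):
    """q1, q2: RGB images recovered from the under-/over-exposed frames
    (shape H x W x 3); p1, p2: their luminance maps; w1, w2: pixel weights."""
    w1 = gaussian_filter(w1, sigma)                 # Gaussian blur of the weight maps
    w2 = gaussian_filter(w2, sigma)
    a1 = 1.0 - np.abs(p1 - 127.0) / 127.0           # contrast-stretch factors
    a2 = 1.0 - np.abs(p2 - 127.0) / 127.0
    out = np.empty_like(q1, dtype=float)
    for i in range(3):                              # the three RGB color channels
        out[..., i] = ((1 - w1) * a1 * q1[..., i] + (1 - w2) * a2 * q2[..., i]
                       + w1 * q2[..., i] + w2 * q1[..., i])
    return np.clip(out, 0, 255)                     # keep the result in the 8-bit range
```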
Comparing pictures taken with the conventional method and with the method of the present invention, as shown in Fig. 7, the picture at the top left is under-exposed, the picture at the top right is over-exposed, the picture at the bottom left is merged with the conventional method, and the picture at the bottom right is obtained with the method of the present invention. It can be seen that the picture taken with the method of the present invention has better contrast, no artifacts and a natural transition band, and is superior to the conventional method.
In the embodiment of the present invention, exposure is performed with different exposure times configured for the odd and even double columns, so that one image frame with different exposure values on the odd and even double columns is obtained; the image frame is decomposed into an under-exposed image frame and an over-exposed image frame, in which under-exposed double columns alternate with missing double columns in the under-exposed image frame and over-exposed double columns alternate with missing double columns in the over-exposed image frame; for the under-exposed image frame, recovered pixel values of the missing double-column pixels are obtained on the red, green and blue channels from the pixel values of the under-exposed double-column pixels and used as the pixel values of the corresponding pixels; for the over-exposed image frame, recovered pixel values of the missing double-column pixels are obtained on the red, green and blue channels from the pixel values of the over-exposed double-column pixels and used as the pixel values of the corresponding pixels; the over-exposed image frame and the under-exposed image frame are then merged according to the pixel values of the pixels on the red, green and blue channels of the two frames to obtain one high dynamic range frame. This overcomes the problem of high-speed motion blur, lowers the frame rate required for high-speed continuous shooting, and at the same time solves the problems of artifacts and unnatural transitions.
Please refer to Fig. 8, which is a schematic structural diagram of the Bayer color filter array based high dynamic range video recording device according to the first embodiment of the present invention. As shown in Fig. 8, the Bayer color filter array based high dynamic range video recording device 10 comprises: a sensor module 11, a decomposition module 12, an under-exposed pixel recovery module 13, an over-exposed pixel recovery module 14 and a merging module 15. The sensor module 11 is configured to perform exposure with different exposure times configurable for the odd and even double columns, so as to obtain an image frame with different exposure values on the odd and even double columns. The decomposition module 12 is connected to the sensor module 11 and is configured to decompose the image frame into an under-exposed image frame and an over-exposed image frame, wherein under-exposed double columns alternate with missing double columns in the under-exposed image frame and over-exposed double columns alternate with missing double columns in the over-exposed image frame. The under-exposed pixel recovery module 13 is connected to the decomposition module 12 and is configured to, for the under-exposed image frame, obtain, on the red, green and blue channels respectively, recovered pixel values of the missing double-column pixels in the under-exposed image frame from the pixel values of the under-exposed double-column pixels as the pixel values of the corresponding pixels. The over-exposed pixel recovery module 14 is connected to the decomposition module 12 and is configured to, for the over-exposed image frame, obtain, on the red, green and blue channels respectively, recovered pixel values of the missing double-column pixels in the over-exposed image frame from the pixel values of the over-exposed double-column pixels as the pixel values of the corresponding pixels. The merging module 15 is connected to the under-exposed pixel recovery module 13 and the over-exposed pixel recovery module 14, and is configured to merge the over-exposed image frame and the under-exposed image frame according to the pixel values of the pixels on the red, green and blue channels of the under-exposed image frame and the over-exposed image frame, so as to obtain one high dynamic range frame.
In the embodiment of the present invention, in the Bayer pattern used, one color filter unit contains one R unit, one B unit and two G units in a 2x2 spatial arrangement, i.e. each color filter unit occupies two rows and two columns. The sensor module 11 exposes the odd double columns and the even double columns separately, i.e. the odd double columns are under-exposed and the even double columns are over-exposed, yielding one image frame with different exposure values on the odd and even double columns. This exposure method guarantees that every exposed double column contains a complete group of color filter units. The decomposition module 12 decomposes the image frame into an under-exposed image frame and an over-exposed image frame. Specifically, in the original image frame, the under-exposed odd double columns are kept unchanged and the even double columns are replaced by missing columns, giving the under-exposed image frame; in the original image frame, the over-exposed even double columns are kept unchanged and the odd double columns are replaced by missing columns, giving the over-exposed image frame. Hence under-exposed double columns alternate with missing double columns in the under-exposed image frame, and over-exposed double columns alternate with missing double columns in the over-exposed image frame.
Specifically, the under-exposed pixel recovery module 13 is configured to: calculate estimated pixel values of the missing double-column pixels from the pixel values of the adjacent under-exposed double-column pixels, for example with an averaging interpolation. Since the green channel carries more information than the red and blue channels in the under-exposed image frame, recovered pixel values of the missing pixels on the green channel are obtained first by interpolation; the interpolation method includes at least one of bilinear interpolation and cubic interpolation, and of course other interpolation methods may also be applied in other embodiments of the present invention to recover the green channel information. The differences between the pixel values of the pixels on the red and blue channels and the recovered pixel values on the green channel are then calculated respectively, and these differences are interpolated to obtain recovered difference values of the missing pixels on the red/blue channels; the interpolation method here is the same as above, i.e. it includes at least one of bilinear interpolation and cubic interpolation. Finally, the recovered difference values of the missing pixels on the red/blue channels are added to the recovered pixel values on the green channel to obtain recovered pixel values on the red and blue channels, which replace the estimated values of the missing double-column pixels in the under-exposed image frame and serve as the pixel values of the corresponding pixels. The blue channel is processed with the same method, which is not repeated here. The recovered pixel values of the missing columns on the red, green and blue channels of the under-exposed image frame are finally obtained for the subsequent frame-merging processing.
The over-exposed pixel recovery module 14 is configured to: calculate estimated pixel values of the missing double-column pixels from the pixel values of the adjacent over-exposed double-column pixels, for example with an averaging interpolation. Since the green channel carries more information than the red and blue channels, recovered pixel values of the missing pixels on the green channel are obtained first by interpolation; the interpolation method includes at least one of bilinear interpolation and cubic interpolation, and of course other interpolation methods may also be applied in other embodiments of the present invention to recover the green channel information. The differences between the pixel values of the pixels on the red and blue channels and the recovered pixel values on the green channel are then calculated respectively, and these differences are interpolated to obtain recovered difference values of the missing pixels on the red/blue channels; the interpolation method here is the same as above, i.e. it includes at least one of bilinear interpolation and cubic interpolation. Finally, the recovered difference values of the missing pixels on the red/blue channels are added to the recovered pixel values on the green channel to obtain recovered pixel values on the red and blue channels, which replace the estimated values of the missing double-column pixels in the over-exposed image frame and serve as the pixel values of the corresponding pixels. The blue channel is processed with the same method, which is not repeated here. The recovered pixel values of the missing columns on the red, green and blue channels of the over-exposed image frame are finally obtained for the subsequent frame-merging processing.
The merging module 15 is configured to: obtain the luminance of each pixel in the under-exposed image frame and the over-exposed image frame respectively from the pixel values of the pixels on the red, green and blue channels. The luminance is obtained with existing techniques; for example, the luminance may be (R + G + B)/3, and of course other methods may also be used to obtain the luminance from the pixel values of the pixels on the red, green and blue channels. A weight for each pixel is then obtained from the luminance of each pixel in the under-exposed image frame and the over-exposed image frame. Specifically, an adaptive under-exposure threshold T1,new and an adaptive over-exposure threshold T2,new are calculated from a preset under-exposure threshold T1 and a preset over-exposure threshold T2 using the following relations:
T1,new = max_{x∈U(P1<T1)} P2,x
T2,new = min_{x∈U(P2>T2)} P1,x
where P1 and P2 are the luminances of the pixels in the under-exposed image frame and the over-exposed image frame respectively, U(P1 < T1) denotes the set of under-exposed pixels of P1 that are smaller than T1, and U(P2 > T2) denotes the set of over-exposed pixels of P2 that are larger than T2. Thus T1,new is the upper bound of the pixel values of P2 at the under-exposed positions of P1, and T2,new is the lower bound of the pixel values of P1 at the over-exposed positions of P2. The weight of each pixel is then calculated from the adaptive under-exposure threshold T1,new and the adaptive over-exposure threshold T2,new using the following relations:
ω1 = (T1,new − P1) / T1,new if P1 < T1,new, and ω1 = 0 otherwise;
ω2 = (P2 − T2,new) / (255 − T2,new) if P2 > T2,new, and ω2 = 0 otherwise;
where ω1 is the weight of a pixel with luminance P1 in the under-exposed image frame and ω2 is the weight of a pixel with luminance P2 in the over-exposed image frame. Finally, the over-exposed image frame and the under-exposed image frame are merged according to the weight of each pixel to obtain one high dynamic range frame.
To avoid the unnatural transitions caused by interleaved over-exposed and under-exposed regions, the weight maps ω1 and ω2 are Gaussian-blurred, i.e. the weight of each pixel is convolved with a two-dimensional Gaussian filter. Specifically, a two-dimensional Gaussian filter with window width H and standard deviation σ is convolved with ω1 and ω2. H is related to the size of the image frame, and σ = H/6 is generally chosen; of course, other values of σ may also be chosen as needed. The frame merge is then performed with the following relation, together with contrast stretching:
qnew,i = (1 − ω1)·a1·q1,i + (1 − ω2)·a2·q2,i + ω1·q2,i + ω2·q1,i, i = 1, 2, 3
where a1 = 1 − |P1 − 127|/127 and a2 = 1 − |P2 − 127|/127 are used to enhance contrast, and q1,i and q2,i are the three color channels of the RGB images. The pictures taken with the method of the present invention have better contrast, no artifacts and natural transition bands, and are superior to the conventional method.
In summary, in the present invention, exposure is performed with different exposure times configured for the odd and even double columns, so that one image frame with different exposure values on the odd and even double columns is obtained; the image frame is decomposed into an under-exposed image frame and an over-exposed image frame, in which under-exposed double columns alternate with missing double columns in the under-exposed image frame and over-exposed double columns alternate with missing double columns in the over-exposed image frame; for the under-exposed image frame, recovered pixel values of the missing double-column pixels are obtained on the red, green and blue channels from the pixel values of the under-exposed double-column pixels and used as the pixel values of the corresponding pixels; for the over-exposed image frame, recovered pixel values of the missing double-column pixels are obtained on the red, green and blue channels from the pixel values of the over-exposed double-column pixels and used as the pixel values of the corresponding pixels; the over-exposed image frame and the under-exposed image frame are then merged according to the pixel values of the pixels on the red, green and blue channels of the two frames to obtain one high dynamic range frame. This overcomes the problem of high-speed motion blur and lowers the frame rate required for high-speed continuous shooting, and the further processing of the weights during frame merging also solves the problems of artifacts and unnatural transitions.
In the several embodiments provided by the present invention, it should be understood that the disclosed system, device and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative; for instance, the division of the modules or units is only a division of logical functions, and other divisions are possible in actual implementation, e.g. multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be electrical, mechanical or of other forms. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or any other medium capable of storing program code.
The above are only embodiments of the present invention and do not limit the patent scope of the present invention. Any equivalent structural or process transformation made using the contents of the description and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims

1. A Bayer color filter array based high dynamic range video recording method, characterized in that the method comprises:
performing exposure with different exposure times configured for odd and even double columns to obtain one image frame with different exposure values on the odd and even double columns, wherein the odd double columns are the columns of the image frame whose column index is divisible by 4 or has a remainder of 1 when divided by 4, and the even double columns are the columns of the image frame whose column index has a remainder of 2 or 3 when divided by 4;
decomposing the image frame into an under-exposed image frame and an over-exposed image frame, wherein under-exposed double columns and missing double columns are distributed alternately in the under-exposed image frame, and over-exposed double columns and missing double columns are distributed alternately in the over-exposed image frame;
for the under-exposed image frame, obtaining, on the red, green and blue channels respectively, recovered pixel values of the missing double-column pixels in the under-exposed image frame from the pixel values of the under-exposed double-column pixels as the pixel values of the corresponding pixels;
for the over-exposed image frame, obtaining, on the red, green and blue channels respectively, recovered pixel values of the missing double-column pixels in the over-exposed image frame from the pixel values of the over-exposed double-column pixels as the pixel values of the corresponding pixels;
merging the over-exposed image frame and the under-exposed image frame according to the pixel values of the pixels on the red, green and blue channels of the under-exposed image frame and the over-exposed image frame to obtain one high dynamic range frame.
2. The method according to claim 1, characterized in that the step of obtaining, on the red, green and blue channels respectively, recovered pixel values of the missing double-column pixels in the under-exposed image frame from the pixel values of the under-exposed double-column pixels as the pixel values of the corresponding pixels comprises:
calculating estimated pixel values of the missing double-column pixels from the pixel values of the adjacent under-exposed double-column pixels;
obtaining recovered pixel values of the missing pixels on the green channel by interpolation;
calculating, respectively, the differences between the pixel values of the pixels on the red and blue channels and the recovered pixel values on the green channel;
interpolating the differences between the pixel values of the pixels on the red and blue channels and the recovered pixel values on the green channel to obtain recovered difference values of the missing pixels on the red/blue channels;
adding the recovered difference values of the missing pixels on the red/blue channels to the recovered pixel values on the green channel to obtain recovered pixel values on the red and blue channels, so as to replace the estimated values of the missing double-column pixels in the under-exposed image frame and serve as the pixel values of the corresponding pixels.
3. The method according to claim 1, characterized in that the step of obtaining, on the red, green and blue channels respectively, recovered pixel values of the missing double-column pixels in the over-exposed image frame from the pixel values of the over-exposed double-column pixels as the pixel values of the corresponding pixels comprises:
calculating estimated pixel values of the missing double-column pixels from the pixel values of the adjacent over-exposed double-column pixels;
obtaining recovered pixel values of the missing pixels on the green channel by interpolation;
calculating, respectively, the differences between the pixel values of the pixels on the red and blue channels and the recovered pixel values on the green channel;
interpolating the differences between the pixel values of the pixels on the red and blue channels and the recovered pixel values on the green channel to obtain recovered difference values of the missing pixels on the red/blue channels;
adding the recovered difference values of the missing pixels on the red/blue channels to the recovered pixel values on the green channel to obtain recovered pixel values on the red and blue channels, so as to replace the estimated values of the missing double-column pixels in the over-exposed image frame and serve as the pixel values of the corresponding pixels.
4. The method according to claim 2 or 3, characterized in that the interpolation method comprises at least one of bilinear interpolation and cubic interpolation.
5. The method according to claim 1, characterized in that the step of merging the over-exposed image frame and the under-exposed image frame according to the pixel values of the pixels on the red, green and blue channels of the under-exposed image frame and the over-exposed image frame to obtain one high dynamic range frame comprises:
obtaining the luminance of each pixel in the under-exposed image frame and the over-exposed image frame respectively from the pixel values of the pixels on the red, green and blue channels;
obtaining a weight of each pixel from the luminance of each pixel in the under-exposed image frame and the over-exposed image frame;
merging the over-exposed image frame and the under-exposed image frame according to the weight of each pixel to obtain one high dynamic range frame.
6. The method according to claim 5, characterized in that the step of obtaining a weight of each pixel from the luminance of each pixel in the under-exposed image frame and the over-exposed image frame comprises:
calculating an adaptive under-exposure threshold T1,new and an adaptive over-exposure threshold T2,new from a preset under-exposure threshold T1 and a preset over-exposure threshold T2 using the following relations:
T1,new = max_{x∈U(P1<T1)} P2,x
T2,new = min_{x∈U(P2>T2)} P1,x
where P1 and P2 are the luminances of the pixels in the under-exposed image frame and the over-exposed image frame respectively, U(P1 < T1) denotes the set of under-exposed pixels of P1 that are smaller than T1, and U(P2 > T2) denotes the set of over-exposed pixels of P2 that are larger than T2;
calculating the weight of each pixel from the adaptive under-exposure threshold T1,new and the adaptive over-exposure threshold T2,new using the following relations:
ω1 = (T1,new − P1) / T1,new if P1 < T1,new, and ω1 = 0 otherwise;
ω2 = (P2 − T2,new) / (255 − T2,new) if P2 > T2,new, and ω2 = 0 otherwise;
where ω1 is the weight of a pixel with luminance P1 in the under-exposed image frame and ω2 is the weight of a pixel with luminance P2 in the over-exposed image frame.
7. The method according to claim 6, characterized in that the step of merging the over-exposed image frame and the under-exposed image frame according to the weight of each pixel to obtain one high dynamic range frame comprises:
convolving the weight of each pixel with a two-dimensional Gaussian filter;
performing the frame merge with the following relation and applying contrast stretching:
qnew,i = (1 − ω1)·a1·q1,i + (1 − ω2)·a2·q2,i + ω1·q2,i + ω2·q1,i, i = 1, 2, 3
where a1 = 1 − |P1 − 127|/127 and a2 = 1 − |P2 − 127|/127 are used to enhance contrast, and q1,i and q2,i are the three color channels of the RGB images.
8. A Bayer color filter array based high dynamic range video recording device, characterized in that the device comprises:
a sensor module, configured to perform exposure with different exposure times configurable for odd and even double columns to obtain an image frame with different exposure values on the odd and even double columns, wherein the odd double columns are the columns of the image frame whose column index is divisible by 4 or has a remainder of 1 when divided by 4, and the even double columns are the columns of the image frame whose column index has a remainder of 2 or 3 when divided by 4;
a decomposition module, connected to the sensor module and configured to decompose the image frame into an under-exposed image frame and an over-exposed image frame, wherein under-exposed double columns and missing double columns are distributed alternately in the under-exposed image frame, and over-exposed double columns and missing double columns are distributed alternately in the over-exposed image frame;
an under-exposed pixel recovery module, connected to the decomposition module and configured to, for the under-exposed image frame, obtain, on the red, green and blue channels respectively, recovered pixel values of the missing double-column pixels in the under-exposed image frame from the pixel values of the under-exposed double-column pixels as the pixel values of the corresponding pixels;
an over-exposed pixel recovery module, connected to the decomposition module and configured to, for the over-exposed image frame, obtain, on the red, green and blue channels respectively, recovered pixel values of the missing double-column pixels in the over-exposed image frame from the pixel values of the over-exposed double-column pixels as the pixel values of the corresponding pixels;
a merging module, connected to the under-exposed pixel recovery module and the over-exposed pixel recovery module and configured to merge the over-exposed image frame and the under-exposed image frame according to the pixel values of the pixels on the red, green and blue channels of the under-exposed image frame and the over-exposed image frame to obtain one high dynamic range frame.
9. The device according to claim 8, characterized in that the under-exposed pixel recovery module is configured to:
calculate estimated pixel values of the missing double-column pixels from the pixel values of the adjacent under-exposed double-column pixels;
obtain recovered pixel values of the missing pixels on the green channel by interpolation;
calculate, respectively, the differences between the pixel values of the pixels on the red and blue channels and the recovered pixel values on the green channel;
interpolate the differences between the pixel values of the pixels on the red and blue channels and the recovered pixel values on the green channel to obtain recovered difference values of the missing pixels on the red/blue channels;
add the recovered difference values of the missing pixels on the red/blue channels to the recovered pixel values on the green channel to obtain recovered pixel values on the red and blue channels, so as to replace the estimated values of the missing double-column pixels in the under-exposed image frame and serve as the pixel values of the corresponding pixels.
10. The device according to claim 8, characterized in that the over-exposed pixel recovery module is configured to:
calculate estimated pixel values of the missing double-column pixels from the pixel values of the adjacent over-exposed double-column pixels;
obtain recovered pixel values of the missing pixels on the green channel by interpolation;
calculate, respectively, the differences between the pixel values of the pixels on the red and blue channels and the recovered pixel values on the green channel;
interpolate the differences between the pixel values of the pixels on the red and blue channels and the recovered pixel values on the green channel to obtain recovered difference values of the missing pixels on the red/blue channels;
add the recovered difference values of the missing pixels on the red/blue channels to the recovered pixel values on the green channel to obtain recovered pixel values on the red and blue channels, so as to replace the estimated values of the missing double-column pixels in the over-exposed image frame and serve as the pixel values of the corresponding pixels.
11. The device according to claim 9 or 10, characterized in that the interpolation comprises at least one of bilinear interpolation and cubic interpolation.
12. The device according to claim 8, characterized in that the merging module is configured to:
obtain the luminance of each pixel in the under-exposed image frame and the over-exposed image frame respectively from the pixel values of the pixels on the red, green and blue channels;
obtain a weight of each pixel from the luminance of each pixel in the under-exposed image frame and the over-exposed image frame;
merge the over-exposed image frame and the under-exposed image frame according to the weight of each pixel to obtain one high dynamic range frame.
13. The device according to claim 12, characterized in that the merging module is further configured to:
calculate an adaptive under-exposure threshold T1,new and an adaptive over-exposure threshold T2,new from a preset under-exposure threshold T1 and a preset over-exposure threshold T2 using the following relations:
T1,new = max_{x∈U(P1<T1)} P2,x
T2,new = min_{x∈U(P2>T2)} P1,x
where P1 and P2 are the luminances of the pixels in the under-exposed image frame and the over-exposed image frame respectively, U(P1 < T1) denotes the set of under-exposed pixels of P1 that are smaller than T1, and U(P2 > T2) denotes the set of over-exposed pixels of P2 that are larger than T2;
calculate the weight of each pixel from the adaptive under-exposure threshold T1,new and the adaptive over-exposure threshold T2,new using the following relations:
ω1 = (T1,new − P1) / T1,new if P1 < T1,new, and ω1 = 0 otherwise;
ω2 = (P2 − T2,new) / (255 − T2,new) if P2 > T2,new, and ω2 = 0 otherwise;
where ω1 is the weight of a pixel with luminance P1 in the under-exposed image frame and ω2 is the weight of a pixel with luminance P2 in the over-exposed image frame.
14. The device according to claim 13, characterized in that the merging module is further configured to:
convolve the weight of each pixel with a two-dimensional Gaussian filter;
perform the frame merge with the following relation and apply contrast stretching:
qnew,i = (1 − ω1)·a1·q1,i + (1 − ω2)·a2·q2,i + ω1·q2,i + ω2·q1,i, i = 1, 2, 3
where a1 = 1 − |P1 − 127|/127 and a2 = 1 − |P2 − 127|/127 are used to enhance contrast, and q1,i and q2,i are the three color channels of the RGB images.
15. A Bayer color filter array based high dynamic range video recording method, characterized in that the method comprises:
performing exposure with different exposure times configured for odd and even double columns to obtain one image frame with different exposure values on the odd and even double columns, wherein the odd double columns are the columns of the image frame whose column index is divisible by 4 or has a remainder of 1 when divided by 4, and the even double columns are the columns of the image frame whose column index has a remainder of 2 or 3 when divided by 4;
decomposing the image frame into an under-exposed image frame and an over-exposed image frame, wherein under-exposed double columns and missing double columns are distributed alternately in the under-exposed image frame, and over-exposed double columns and missing double columns are distributed alternately in the over-exposed image frame;
for the under-exposed image frame, obtaining, on the red, green and blue channels respectively, recovered pixel values of the missing double-column pixels in the under-exposed image frame from the pixel values of the under-exposed double-column pixels as the pixel values of the corresponding pixels;
for the over-exposed image frame, obtaining, on the red, green and blue channels respectively, recovered pixel values of the missing double-column pixels in the over-exposed image frame from the pixel values of the over-exposed double-column pixels as the pixel values of the corresponding pixels;
merging the over-exposed image frame and the under-exposed image frame according to the pixel values of the pixels on the red, green and blue channels of the under-exposed image frame and the over-exposed image frame to obtain one high dynamic range frame;
wherein the step of merging the over-exposed image frame and the under-exposed image frame according to the pixel values of the pixels on the red, green and blue channels of the under-exposed image frame and the over-exposed image frame to obtain one high dynamic range frame comprises:
obtaining the luminance of each pixel in the under-exposed image frame and the over-exposed image frame respectively from the pixel values of the pixels on the red, green and blue channels;
obtaining a weight of each pixel from the luminance of each pixel in the under-exposed image frame and the over-exposed image frame;
merging the over-exposed image frame and the under-exposed image frame according to the weight of each pixel to obtain one high dynamic range frame.
16. The method according to claim 15, characterized in that the step of obtaining, on the red, green and blue channels respectively, recovered pixel values of the missing double-column pixels in the under-exposed image frame from the pixel values of the under-exposed double-column pixels as the pixel values of the corresponding pixels comprises:
calculating estimated pixel values of the missing double-column pixels from the pixel values of the adjacent under-exposed double-column pixels;
obtaining recovered pixel values of the missing pixels on the green channel by interpolation;
calculating, respectively, the differences between the pixel values of the pixels on the red and blue channels and the recovered pixel values on the green channel;
interpolating the differences between the pixel values of the pixels on the red and blue channels and the recovered pixel values on the green channel to obtain recovered difference values of the missing pixels on the red/blue channels;
adding the recovered difference values of the missing pixels on the red/blue channels to the recovered pixel values on the green channel to obtain recovered pixel values on the red and blue channels, so as to replace the estimated values of the missing double-column pixels in the under-exposed image frame and serve as the pixel values of the corresponding pixels.
17. The method according to claim 15, characterized in that the step of obtaining, on the red, green and blue channels respectively, recovered pixel values of the missing double-column pixels in the over-exposed image frame from the pixel values of the over-exposed double-column pixels as the pixel values of the corresponding pixels comprises:
calculating estimated pixel values of the missing double-column pixels from the pixel values of the adjacent over-exposed double-column pixels;
obtaining recovered pixel values of the missing pixels on the green channel by interpolation;
calculating, respectively, the differences between the pixel values of the pixels on the red and blue channels and the recovered pixel values on the green channel;
interpolating the differences between the pixel values of the pixels on the red and blue channels and the recovered pixel values on the green channel to obtain recovered difference values of the missing pixels on the red/blue channels;
adding the recovered difference values of the missing pixels on the red/blue channels to the recovered pixel values on the green channel to obtain recovered pixel values on the red and blue channels, so as to replace the estimated values of the missing double-column pixels in the over-exposed image frame and serve as the pixel values of the corresponding pixels.
18. The method according to claim 16 or 17, characterized in that the interpolation method comprises at least one of bilinear interpolation and cubic interpolation.
19. The method according to claim 15, characterized in that the step of obtaining a weight of each pixel from the luminance of each pixel in the under-exposed image frame and the over-exposed image frame comprises:
calculating an adaptive under-exposure threshold T1,new and an adaptive over-exposure threshold T2,new from a preset under-exposure threshold T1 and a preset over-exposure threshold T2 using the following relations:
T1,new = max_{x∈U(P1<T1)} P2,x
T2,new = min_{x∈U(P2>T2)} P1,x
where P1 and P2 are the luminances of the pixels in the under-exposed image frame and the over-exposed image frame respectively, U(P1 < T1) denotes the set of under-exposed pixels of P1 that are smaller than T1, and U(P2 > T2) denotes the set of over-exposed pixels of P2 that are larger than T2;
calculating the weight of each pixel from the adaptive under-exposure threshold T1,new and the adaptive over-exposure threshold T2,new using the following relations:
ω1 = (T1,new − P1) / T1,new if P1 < T1,new, and ω1 = 0 otherwise;
ω2 = (P2 − T2,new) / (255 − T2,new) if P2 > T2,new, and ω2 = 0 otherwise;
where ω1 is the weight of a pixel with luminance P1 in the under-exposed image frame and ω2 is the weight of a pixel with luminance P2 in the over-exposed image frame.
20. The method according to claim 19, characterized in that the step of merging the over-exposed image frame and the under-exposed image frame according to the weight of each pixel to obtain one high dynamic range frame comprises:
convolving the weight of each pixel with a two-dimensional Gaussian filter;
performing the frame merge with the following relation and applying contrast stretching:
qnew,i = (1 − ω1)·a1·q1,i + (1 − ω2)·a2·q2,i + ω1·q2,i + ω2·q1,i, i = 1, 2, 3
where a1 = 1 − |P1 − 127|/127 and a2 = 1 − |P2 − 127|/127 are used to enhance contrast, and q1,i and q2,i are the three color channels of the RGB images.
PCT/CN2014/080967 2014-06-27 2014-06-27 基于拜尔颜色滤波阵列的高动态范围视频录制方法和装置 WO2015196456A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2014/080967 WO2015196456A1 (zh) 2014-06-27 2014-06-27 基于拜尔颜色滤波阵列的高动态范围视频录制方法和装置
JP2016531711A JP6090820B2 (ja) 2014-06-27 2014-06-27 バイヤーカラーフィルタアレイに基づくハイダイナミックレンジの動画録画方法及び装置
US15/386,630 US9858644B2 (en) 2014-06-27 2016-12-21 Bayer color filter array based high dynamic range video recording method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2014/080967 WO2015196456A1 (zh) 2014-06-27 2014-06-27 基于拜尔颜色滤波阵列的高动态范围视频录制方法和装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/386,630 Continuation US9858644B2 (en) 2014-06-27 2016-12-21 Bayer color filter array based high dynamic range video recording method and device

Publications (1)

Publication Number Publication Date
WO2015196456A1 true WO2015196456A1 (zh) 2015-12-30

Family

ID=54936517

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/080967 WO2015196456A1 (zh) 2014-06-27 2014-06-27 基于拜尔颜色滤波阵列的高动态范围视频录制方法和装置

Country Status (3)

Country Link
US (1) US9858644B2 (zh)
JP (1) JP6090820B2 (zh)
WO (1) WO2015196456A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018113685A1 (en) * 2016-12-20 2018-06-28 Guangdong Oppo Mobile Telecommunications Corp.,Ltd. Image processing method and device, and non-transitory computer-readable storage medium
CN109409299A (zh) * 2018-10-30 2019-03-01 盯盯拍(深圳)云技术有限公司 图像识别方法及图像识别装置
CN112204948A (zh) * 2019-09-19 2021-01-08 深圳市大疆创新科技有限公司 Hdr图像生成方法、滤光片阵列、图像传感器、图像处理芯片以及摄像装置
US11310532B2 (en) 2017-02-24 2022-04-19 Interdigital Vc Holdings, Inc. Method and device for reconstructing image data from decoded image data

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106464815B (zh) * 2014-11-21 2020-01-14 深圳市大疆创新科技有限公司 高动态范围成像的快速自适应混合的系统以及方法
JP6561571B2 (ja) * 2015-05-12 2019-08-21 ソニー株式会社 医療用撮像装置、撮像方法及び撮像装置
US10366478B2 (en) * 2016-05-18 2019-07-30 Interdigital Ce Patent Holdings Method and device for obtaining a HDR image by graph signal processing
KR102648747B1 (ko) 2019-01-18 2024-03-20 삼성전자주식회사 Hdr 이미지를 생성하기 위한 이미징 시스템 및 그것의 동작 방법
KR20220121328A (ko) * 2021-02-25 2022-09-01 삼성전자주식회사 영상 처리 방법 및 장치
CN114205533B (zh) * 2021-10-20 2024-06-07 浙江华感科技有限公司 视频帧校正方法、电子设备和计算机可读存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102075688A (zh) * 2010-12-28 2011-05-25 青岛海信网络科技股份有限公司 单帧双曝光图像宽动态处理方法
CN102647565A (zh) * 2012-04-18 2012-08-22 格科微电子(上海)有限公司 像素阵列的排列方法、图像传感器及图像传感方法
US20120314100A1 (en) * 2011-06-09 2012-12-13 Apple Inc. Image sensor having hdr capture capability
CN104168403A (zh) * 2014-06-27 2014-11-26 深圳市大疆创新科技有限公司 基于拜尔颜色滤波阵列的高动态范围视频录制方法和装置

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101643319B1 (ko) 2010-01-11 2016-07-27 삼성전자주식회사 하이 다이나믹 레인지 영상을 획득하는 장치 및 그 방법
JP2014030073A (ja) * 2012-07-31 2014-02-13 Sony Corp 画像処理装置、および画像処理方法、並びにプログラム
CN102970549B (zh) 2012-09-20 2015-03-18 华为技术有限公司 图像处理方法及装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102075688A (zh) * 2010-12-28 2011-05-25 青岛海信网络科技股份有限公司 单帧双曝光图像宽动态处理方法
US20120314100A1 (en) * 2011-06-09 2012-12-13 Apple Inc. Image sensor having hdr capture capability
CN102647565A (zh) * 2012-04-18 2012-08-22 格科微电子(上海)有限公司 像素阵列的排列方法、图像传感器及图像传感方法
CN104168403A (zh) * 2014-06-27 2014-11-26 深圳市大疆创新科技有限公司 基于拜尔颜色滤波阵列的高动态范围视频录制方法和装置

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018113685A1 (en) * 2016-12-20 2018-06-28 Guangdong Oppo Mobile Telecommunications Corp.,Ltd. Image processing method and device, and non-transitory computer-readable storage medium
US10692199B2 (en) 2016-12-20 2020-06-23 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and device, and non-transitory computer-readable storage medium
US11310532B2 (en) 2017-02-24 2022-04-19 Interdigital Vc Holdings, Inc. Method and device for reconstructing image data from decoded image data
CN109409299A (zh) * 2018-10-30 2019-03-01 盯盯拍(深圳)云技术有限公司 图像识别方法及图像识别装置
CN112204948A (zh) * 2019-09-19 2021-01-08 深圳市大疆创新科技有限公司 Hdr图像生成方法、滤光片阵列、图像传感器、图像处理芯片以及摄像装置
WO2021051354A1 (zh) * 2019-09-19 2021-03-25 深圳市大疆创新科技有限公司 Hdr图像生成方法、滤光片阵列、图像传感器、图像处理芯片以及摄像装置

Also Published As

Publication number Publication date
JP2017502557A (ja) 2017-01-19
US9858644B2 (en) 2018-01-02
US20170103497A1 (en) 2017-04-13
JP6090820B2 (ja) 2017-03-08

Similar Documents

Publication Publication Date Title
WO2015196456A1 (zh) 基于拜尔颜色滤波阵列的高动态范围视频录制方法和装置
US8890983B2 (en) Tone mapping for low-light video frame enhancement
KR101032574B1 (ko) 이미지 안정화를 위한 방법, 장치, 전자 장치 및 컴퓨터 판독가능 매체
US9307212B2 (en) Tone mapping for low-light video frame enhancement
CN104168403B (zh) 基于拜尔颜色滤波阵列的高动态范围视频录制方法和装置
JP5016715B2 (ja) 画像のダイナミック・レンジを改善するためのマルチ露光パターン
US8199222B2 (en) Low-light video frame enhancement
JP5784587B2 (ja) 画像選択および結合の方法およびデバイス
US8581992B2 (en) Image capturing apparatus and camera shake correction method, and computer-readable medium
US20090167893A1 (en) RGBW Sensor Array
US20140218550A1 (en) Image capturing device and image processing method thereof
CN113228094A (zh) 图像处理器
JP2013509820A (ja) 符号化ローリングシャッタの方法およびシステム
WO2014044045A1 (zh) 图像处理方法及装置
WO2014106470A1 (zh) 图像处理方法、装置和拍摄终端
JP2011525058A (ja) 撮像システムにおける移動ぶれ及びゴースト防止のための方法及び装置
JP4217041B2 (ja) フィルタ処理
US11941791B2 (en) High-dynamic-range image generation with pre-combination denoising
JP2022179514A (ja) 制御装置、撮像装置、制御方法およびプログラム
Gil Rodríguez et al. High quality video in high dynamic range scenes from interlaced dual-iso footage
TWI502990B (zh) 產生高動態範圍影像的方法及其影像感測器
JP2016040870A (ja) 画像処理装置、像形成方法およびプログラム
CN117455805A (zh) 图像处理方法及装置、计算机可读存储介质、终端
JP2005064853A (ja) 撮像装置、プログラム、および色バランス補正方法
JP2018056848A (ja) 撮像装置、撮像装置の制御方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14895680

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2016531711

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14895680

Country of ref document: EP

Kind code of ref document: A1