WO2018099010A1 - Control method, control device and electronic device - Google Patents

Control method, control device and electronic device

Info

Publication number
WO2018099010A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
image
unit
photosensitive
array
Prior art date
Application number
PCT/CN2017/085214
Other languages
English (en)
French (fr)
Inventor
唐城
Original Assignee
广东欧珀移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广东欧珀移动通信有限公司
Publication of WO2018099010A1

Classifications

    • H04N25/44 Extracting pixel data from image sensors by controlling scanning circuits by partially reading an SSIS array
    • H04N25/46 Extracting pixel data from image sensors by controlling scanning circuits by combining or binning pixels
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T3/4007 Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
    • H04N23/55 Optical parts specially adapted for electronic image sensors; Mounting thereof
    • H04N23/741 Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • H04N23/843 Demosaicing, e.g. interpolating colour pixel values
    • H04N23/88 Camera processing pipelines for colour balance, e.g. white-balance circuits or colour temperature control
    • H04N25/134 Arrangement of colour filter arrays [CFA] based on three different wavelength filter elements
    • H04N25/57 Control of the dynamic range
    • H04N25/587 Control of the dynamic range involving two or more exposures acquired sequentially, e.g. using the combination of odd and even image fields
    • G06T2207/10144 Varying exposure
    • G06T2207/20208 High dynamic range [HDR] image processing
    • G06T2207/20221 Image fusion; Image merging
    • G06T3/4015 Image demosaicing, e.g. colour filter arrays [CFA] or Bayer patterns

Definitions

  • the present invention relates to image processing technologies, and in particular, to a control method, a control device, and an electronic device.
  • An existing image sensor includes an array of photosensitive pixel units and an array of filter units disposed on the array of photosensitive pixel units, each filter unit covering a corresponding one of the photosensitive pixel units, and each photosensitive pixel unit including a plurality of photosensitive pixels.
  • The image sensor may be controlled to expose and output a merged image. The merged image includes a merged pixel array, in which the plurality of photosensitive pixels of the same photosensitive pixel unit are combined and output as one merged pixel. In this way, the signal-to-noise ratio of the merged image can be improved; however, the resolution of the merged image is lowered.
  • the patch image includes an image pixel unit array, the image pixel unit includes original pixels, and each photosensitive pixel corresponds to one original pixel.
  • Since the plurality of original pixels corresponding to the same filter unit have the same color, the resolution of the patch image cannot be improved. Therefore, it is necessary to convert the patch image into a pseudo original image by interpolation calculation, where the pseudo original image includes pseudo original pixels arranged in a Bayer array.
  • When the High Dynamic Range (HDR) function is applied, multiple frames of original images with different brightness are required, that is, multiple interpolation calculations are required, which is resource-intensive and time-consuming.
  • Embodiments of the present invention provide a control method, a control device, and an electronic device.
  • A control method is provided for controlling an electronic device, the electronic device comprising an imaging device and a display, the imaging device comprising an image sensor, the image sensor comprising an array of photosensitive pixel units and an array of filter units disposed on the array of photosensitive pixel units, each of the filter units covering a corresponding one of the photosensitive pixel units, each of the photosensitive pixel units comprising a plurality of photosensitive pixels; the control method comprising the steps of:
  • controlling the photosensitive pixel unit array to expose and output a merged image, the merged image comprising merged pixels arranged in a predetermined array, the plurality of photosensitive pixels of the same photosensitive pixel unit being combined and output as one merged pixel;
  • controlling the photosensitive pixel unit array to expose and output a patch image, the patch image comprising original pixels, each photosensitive pixel corresponding to one original pixel;
  • converting the merged image into a highlight image by scaling calculation, the highlight image comprising highlight pixels arranged in a predetermined array;
  • converting the patch image into a low-light image by interpolation calculation, the low-light image comprising low-light pixels arranged in a predetermined array;
  • merging the highlight image and the low-light image to obtain a wide dynamic range image.
  • A control device is provided for controlling an electronic device, the electronic device comprising an imaging device and a display, the imaging device comprising an image sensor, the image sensor comprising an array of photosensitive pixel units and an array of filter units disposed on the array of photosensitive pixel units, each of the filter units covering a corresponding one of the photosensitive pixel units, each of the photosensitive pixel units comprising a plurality of photosensitive pixels; the control device comprising:
  • a first control module configured to control the photosensitive pixel unit array to expose and output a merged image
  • the merged image includes a merged pixel arranged in a predetermined array, and the plurality of the photosensitive pixels of the same photosensitive pixel unit are combined and output as one Merging pixels;
  • a second control module configured to control the photosensitive pixel unit array to expose and output a patch image, the patch image comprising a predetermined array of original pixels, each of the photosensitive pixels corresponding to one of the original pixels;
  • a first image processing module configured to convert the merged image into a highlight image by scaling calculation, where the highlight image includes highlight pixels arranged in a predetermined array;
  • a second image processing module configured to convert the color patch image into a low-light image by interpolation calculation, the low-light image comprising a low-light pixel arranged in a predetermined array
  • a merging module for combining the highlight image and the low-light image to obtain a wide dynamic range image.
  • An electronic device includes an imaging device, a display, and the above control device.
  • An electronic device includes a housing, a processor, a memory, a circuit board, a power supply circuit, and an imaging device. The circuit board is disposed inside a space enclosed by the housing; the processor and the memory are disposed on the circuit board; the power supply circuit powers the circuits or devices of the electronic device; the memory stores executable program code; and the processor reads the executable program code stored in the memory to run a program corresponding to the executable program code, so as to execute the control method.
  • The control method, the control device, and the electronic device exploit the fact that the different image output modes of the image sensor differ in brightness, and use the images from the two modes to synthesize a wide dynamic range image, thereby reducing the time of the synthesis processing and improving efficiency.
  • FIG. 1 is a schematic flow chart of a control method according to an embodiment of the present invention.
  • FIG. 2 is a block diagram of an electronic device according to an embodiment of the present invention.
  • FIG. 3 is a block diagram of an image sensor according to an embodiment of the present invention.
  • FIG. 4 is a circuit diagram of an image sensor according to an embodiment of the present invention.
  • FIG. 5 is a schematic view of a filter unit according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural view of an image sensor according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of a merged image state according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram showing the state of a patch image according to an embodiment of the present invention.
  • FIG. 9 is a flow chart of a control method of some embodiments of the present invention.
  • FIG. 10 is a block diagram of a second image processing module in accordance with some embodiments of the present invention.
  • FIG. 11 is a schematic diagram of the state of a control method of some embodiments of the present invention.
  • Figure 12 is a schematic diagram of the state of the control method of some embodiments of the present invention.
  • FIG. 13 is a block diagram of a second computing unit of some embodiments of the present invention.
  • FIG. 14 is a flow diagram of a control method of some embodiments of the present invention.
  • FIG. 15 is a flow chart showing a control method of some embodiments of the present invention.
  • FIG. 16 is a schematic diagram of the state of a control method in accordance with some embodiments of the present invention.
  • FIG. 17 is a block diagram of an electronic device in accordance with some embodiments of the present invention.
  • a control method is used to control an electronic device.
  • the electronic device includes an imaging device and a display.
  • The imaging apparatus includes an image sensor, the image sensor including an array of photosensitive pixel units and an array of filter units disposed on the array of photosensitive pixel units, each of the filter units covering a corresponding one of the photosensitive pixel units, and each of the photosensitive pixel units including a plurality of photosensitive pixels.
  • The control method includes the steps of controlling the photosensitive pixel unit array to expose and output a merged image and a patch image, converting the merged image into a highlight image by scaling calculation, converting the patch image into a low-light image by interpolation calculation, and merging the highlight image and the low-light image to obtain a wide dynamic range image.
  • a control device 100 is used to control an electronic device 1000.
  • the electronic device 1000 further includes an imaging device and a display.
  • the imaging device includes an image sensor 200.
  • The control device 100 includes a first control module 110, a second control module 120, a first image processing module 130, a second image processing module 140, and a merge module 150.
  • The control method of the embodiment of the present invention may be implemented by the control device 100 of the embodiment of the present invention, and may be applied to the electronic device 1000 to control the imaging device of the electronic device 1000 so that the image sensor 200 outputs a wide dynamic range image.
  • The electronic device 1000 may be a cell phone or a tablet computer.
  • the imaging device includes a front camera or a rear camera.
  • the image sensor 200 of the embodiment of the present invention includes a photosensitive pixel unit array 210 and a filter unit array 220 disposed on the photosensitive pixel unit array 210.
  • the photosensitive pixel unit array 210 includes a plurality of photosensitive pixel units 210a, each of which includes a plurality of adjacent photosensitive pixels 212.
  • Each of the photosensitive pixels 212 includes a photosensitive device 2121 and a transfer tube 2122, wherein the photosensitive device 2121 can be a photodiode, and the transfer tube 2122 can be a MOS transistor.
  • the filter unit array 220 includes a plurality of filter units 220a, each of which covers a corresponding one of the photosensitive pixel units 210a.
  • The filter cell array 220 includes a Bayer array; that is, the adjacent four filter cells 220a are respectively a red filter unit, a blue filter unit, and two green filter units.
  • Each of the photosensitive pixel units 210a corresponds to a filter unit 220a of the same color. If one photosensitive pixel unit 210a includes a total of n adjacent photosensitive devices 2121, one filter unit 220a covers the n photosensitive devices 2121 in that photosensitive pixel unit 210a; the filter unit 220a may be of an integral structure, or may be assembled from n independent sub-filters.
  • In this embodiment, each of the photosensitive pixel units 210a includes four adjacent photosensitive pixels 212, and every two adjacent photosensitive pixels 212 together constitute one photosensitive pixel subunit 2120; the photosensitive pixel subunit 2120 further includes a source follower 2123 and an analog-to-digital converter 2124.
  • The photosensitive pixel unit 210a further includes an adder 213. One end electrode of each transfer tube 2122 of a photosensitive pixel subunit 2120 is connected to the cathode electrode of the corresponding photosensitive device 2121; the other ends of the transfer tubes 2122 are commonly connected to the gate electrode of the source follower 2123 and, through the source electrode of the source follower 2123, to the analog-to-digital converter 2124.
  • The source follower 2123 may be a MOS transistor.
  • the two photosensitive pixel subunits 2120 are connected to the adder 213 through respective source followers 2123 and analog to digital converters 2124.
  • the adjacent four photosensitive devices 2121 of one photosensitive pixel unit 210a of the image sensor 200 of the embodiment of the present invention share a filter unit 220a of the same color, and each photosensitive device 2121 is connected to a transmission tube 2122.
  • the adjacent two photosensitive devices 2121 share a source follower 2123 and an analog to digital converter 2124 adjacent to each other.
  • the four photosensitive devices 2121 share an adder 213.
  • the adjacent four photosensitive devices 2121 are arranged in a 2*2 array.
  • the two photosensitive devices 2121 in one photosensitive pixel subunit 2120 may be in the same column.
  • the pixels may be combined to output a combined image.
  • the photosensitive device 2121 is used to convert light into electric charge, and the generated electric charge is proportional to the light intensity, and the transfer tube 2122 is used to control the conduction or disconnection of the circuit according to the control signal.
  • the source follower 2123 is used to convert the charge signal generated by the photosensitive device 2121 into a voltage signal.
  • Analog to digital converter 2124 is used to convert the voltage signal into a digital signal.
  • The adder 213 is for adding the two digital signals together for output, for processing by the image processing module connected to the image sensor 200.
  • The image sensor 200 of the embodiment of the present invention may merge its 16M photosensitive pixels into 4M, that is, output a merged image; the merged image includes merged pixels arranged in a predetermined array.
  • the plurality of photosensitive pixels 212 of the same photosensitive pixel unit 210a are combined and output as one combined pixel.
  • Each photosensitive pixel unit 210a includes four photosensitive pixels 212; that is, after combining, the equivalent size of a photosensitive pixel becomes 4 times the original size, which improves the sensitivity of the photosensitive pixels.
  • Since the noise in the image sensor 200 is mostly random noise, one or two of the four photosensitive pixels may contain noise before combination; after the four photosensitive pixels are combined into one large photosensitive pixel, the influence of that noise on the large pixel is reduced, that is, the noise is weakened and the signal-to-noise ratio is improved.
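  • The 2*2 combination described above can be sketched numerically. The following is a minimal illustration, not the patent's implementation; NumPy and the function name `bin_2x2` are assumptions for the sketch of summing each photosensitive pixel unit's four readings into one merged pixel:

```python
import numpy as np

def bin_2x2(raw):
    """Sum each 2*2 block of photosensitive-pixel readings into one
    merged pixel, as the adder does for a photosensitive pixel unit.
    `raw` is an (H, W) array with H and W even; result is (H/2, W/2)."""
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0
    # Group into (h/2, 2, w/2, 2) blocks and sum over the block axes.
    return raw.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# A 16-pixel (4*4) sensor readout becomes a 4-pixel (2*2) merged image.
raw = np.arange(16, dtype=np.float64).reshape(4, 4)
merged = bin_2x2(raw)
```

Summing (rather than averaging) mirrors the hardware adder: each merged pixel collects the charge of four photosensitive pixels, which is where the sensitivity gain comes from.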
  • However, the resolution of the merged image will also decrease as the pixel count decreases.
  • the patch image can be output through image processing.
  • the photosensitive device 2121 is used to convert light into electric charge, and the generated electric charge is proportional to the light intensity, and the transfer tube 2122 is used to control the conduction or disconnection of the circuit according to the control signal.
  • the source follower 2123 is used to convert the charge signal generated by the photosensitive device 2121 into a voltage signal.
  • Analog to digital converter 2124 is used to convert the voltage signal to a digital signal for processing by an image processing module coupled to image sensor 200.
  • The image sensor 200 of the embodiment of the present invention can also maintain a 16M photosensitive pixel output, that is, output a patch image; the patch image includes an array of image pixel units, each image pixel unit including original pixels.
  • The original pixels are arranged in a 2*2 array, and the size of an original pixel is the same as that of a photosensitive pixel. However, since the filter unit 220a covering the four adjacent photosensitive devices 2121 is of a single color, the four adjacent original pixels of each image pixel unit are output in the same color even though the four photosensitive devices 2121 are exposed separately. Therefore, the resolution of the image cannot be improved, and further processing is required.
  • The image is received by the image processing module for processing to output a true color image.
  • When the patch image is output, each photosensitive pixel is output separately. Since the adjacent four photosensitive pixels have the same color, the four adjacent original pixels of one image pixel unit have the same color, forming an atypical Bayer array.
  • The image processing module cannot directly process the atypical Bayer array. That is, when the image sensor 200 uses the same image processing module to be compatible with true color image output in both modes (true color image output in the merge mode and in the color block mode), the patch image needs to be converted; in other words, the image pixel units of the atypical Bayer array need to be converted into the pixel arrangement of a typical Bayer array.
  • When the merged image is output, four adjacent photosensitive pixels of the same color are output as one merged pixel, whereas the patch image is formed by outputting each photosensitive pixel separately; under the same exposure conditions, the sensitivity or brightness of the merged image is therefore higher than that of the patch image.
  • This satisfies the condition for applying the High Dynamic Range (HDR) mode, that is, outputting multiple frames of images with different exposure parameters for the same subject and merging them.
  • However, the size of the merged image output in this mode is not the same as the size of the patch image output at 16M, so the merged image requires dimensional processing before the merge.
  • the merged image can be enlarged by a scaling calculation to match the size of the patch image.
  • the merged image can be converted into a highlighted image, wherein the highlighted image includes highlighted pixels arranged in a Bayer array.
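  • The patent does not fix a particular scaling algorithm. As one hedged sketch (NumPy and the name `upscale_2x` are illustrative assumptions), nearest-neighbour replication enlarges each merged pixel back over the 2*2 area it was binned from, so the highlight image matches the patch image's dimensions:

```python
import numpy as np

def upscale_2x(merged):
    """Enlarge the merged image to the patch image's dimensions.
    Nearest-neighbour replication is the simplest choice; the text
    only requires that the sizes match, so bilinear scaling would
    serve equally well here."""
    return np.repeat(np.repeat(merged, 2, axis=0), 2, axis=1)

# A 2*2 merged image becomes a 4*4 highlight image.
highlight = upscale_2x(np.array([[10.0, 20.0], [30.0, 40.0]]))
```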
  • Each image pixel unit in the patch image is arranged in an atypical Bayer array, so the patch image cannot be directly processed by the image processing module or merged with the highlight image; the patch image must first be processed.
  • For example, the patch image can be converted into a low-light image by interpolation calculation; the low-light image includes low-light pixels arranged in a predetermined array, that is, a Bayer array. In this way, the highlight image and the low-light image can be merged, and the resulting true color image is output to the user through the display.
  • the low-luminance portion adopts the corresponding portion of the merged image, thereby improving the signal-to-noise ratio of the low-light region
  • the high-luminance portion adopts the corresponding portion of the patch image, thereby improving the resolution of the highlight region.
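  • The patent does not specify the exact fusion rule. The sketch below (the threshold and the smooth per-pixel weighting are illustrative assumptions) blends so that dark regions lean on the highlight image for signal-to-noise ratio and bright regions lean on the low-light image for resolution, as described above:

```python
import numpy as np

def fuse_wdr(highlight, lowlight, threshold=128.0):
    """Per-pixel blend of the two images: below `threshold` lean on the
    brighter, cleaner highlight image; above it lean on the sharper
    low-light image. A smooth weight avoids a hard seam between regions."""
    w = np.clip(lowlight / (2.0 * threshold), 0.0, 1.0)  # 0 = dark, 1 = bright
    return (1.0 - w) * highlight + w * lowlight

# A dark pixel stays close to the highlight value, a bright one to the
# low-light value.
out = fuse_wdr(np.array([[100.0, 250.0]]), np.array([[40.0, 240.0]]))
```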
  • For the control device 100, since the image sensor 200 outputs images in two different modes whose brightness differs, the condition for the HDR mode is formed directly.
  • Compared with performing HDR directly at 16M, that is, outputting multiple frames of patch images with different exposure values and interpolating each patch image before merging, this saves time and improves efficiency.
  • the low-light image includes low-light pixels arranged in a Bayer array.
  • the low-light pixel includes a current pixel, and the original pixel includes an associated pixel corresponding to the current pixel.
  • Step S40 includes the steps of:
  • S42: determining whether the color of a current pixel is the same as the color of the associated pixel corresponding to it;
  • S44: if so, using the pixel value of the associated pixel as the pixel value of the current pixel;
  • S46: if not, calculating the pixel value of the current pixel by interpolation according to the pixel values of the associated pixel unit.
  • the second image processing module 140 includes a determination unit 142, a first calculation unit 144, and a second calculation unit 146.
  • Step S42 can be implemented by the determining unit 142
  • step S44 can be implemented by the first calculating unit 144
  • step S46 can be implemented by the second calculating unit 146.
  • The determining unit 142 is configured to determine whether the color of the current pixel is the same as the color of the associated pixel.
  • the first calculating unit 144 is configured to use the pixel value of the associated pixel as the pixel value of the current pixel when the color of the current pixel is the same as the color of the associated pixel.
  • the second calculating unit 146 is configured to calculate a pixel value of the current pixel by interpolation according to the pixel value of the associated pixel unit when the color of the current pixel is different from the color of the associated pixel.
  • the current pixels are R3'3' and R5'5', and the corresponding associated pixels are R33 and B55, respectively.
  • The pixel values mentioned above and below should be broadly understood as color attribute values of the pixel, such as color values.
  • the associated pixel unit includes a plurality of, for example, four, original pixels in the image pixel unit that are the same color as the current pixel and are adjacent to the current pixel.
  • The associated pixel corresponding to R5'5' is B55; the associated pixel unit consists of the original pixels that are adjacent to the image pixel unit where B55 is located and have the same color as R5'5'.
  • the image pixel units in which the associated pixel unit is located are image pixel units in which R44, R74, R47, and R77 are located, and are not other red image pixel units that are spatially farther from the image pixel unit in which B55 is located.
  • The red original pixels closest to B55 are R44, R74, R47, and R77; that is, the associated pixel unit of R5'5' is composed of R44, R74, R47, and R77, which are the same color as R5'5' and adjacent to it.
  • In this way, the original pixels are converted into low-light pixels in different ways according to the situation of the current pixel, thereby converting the patch image into a low-light image. Since the image sensor 200 adopts a filter with a special Bayer array structure, the image signal-to-noise ratio is improved; and since the patch image is interpolated during image processing, the resolution of the image is improved.
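  • The per-pixel branching of steps S42/S44/S46 can be sketched as follows. This is a simplified toy, not the patent's code: the Bayer layout helpers and the naive same-colour averaging used in place of the gradient interpolation of step S46 are assumptions.

```python
def bayer_color(row, col):
    """Colour of a position in a typical Bayer array (R G / G B)."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

def patch_color(row, col):
    """Colour of an original pixel: its whole 2*2 image pixel unit
    shares the colour of the covering filter unit."""
    return bayer_color(row // 2, col // 2)

def naive_interp(patch, r, c):
    """Stand-in for step S46: average nearby pixels of the target colour."""
    target = bayer_color(r, c)
    vals = [patch[rr][cc]
            for rr in range(max(0, r - 3), min(len(patch), r + 4))
            for cc in range(max(0, c - 3), min(len(patch[0]), c + 4))
            if patch_color(rr, cc) == target]
    return sum(vals) / len(vals)

def to_lowlight(patch):
    """Convert an atypical-Bayer patch image to a typical-Bayer low-light image."""
    h, w = len(patch), len(patch[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if bayer_color(r, c) == patch_color(r, c):   # step S42: same colour?
                out[r][c] = patch[r][c]                  # step S44: copy through
            else:
                out[r][c] = naive_interp(patch, r, c)    # step S46: interpolate
    return out

# 4*4 patch image: one red, two green, one blue image pixel unit.
patch = [[10, 10, 20, 20],
         [10, 10, 20, 20],
         [20, 20, 30, 30],
         [20, 20, 30, 30]]
low = to_lowlight(patch)
```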
  • Specifically, step S46 includes the following steps:
  • S461: calculating the amount of gradient in each direction of the associated pixel unit;
  • S462: calculating the weight in each direction of the associated pixel unit;
  • S463: calculating the pixel value of the current pixel according to the amounts of gradient and the weights.
  • The second computing unit 146 includes a first computing sub-unit 1461, a second computing sub-unit 1462, and a third computing sub-unit 1463.
  • Step S461 can be implemented by the first computing sub-unit 1461
  • step S462 can be implemented by the second computing sub-unit 1462
  • step S463 can be implemented by the third computing sub-unit 1463.
  • The first computing sub-unit 1461 is used to calculate the amount of gradient in each direction of the associated pixel unit;
  • the second computing sub-unit 1462 is used to calculate the weight in each direction of the associated pixel unit;
  • the third computing sub-unit 1463 is configured to calculate the pixel value of the current pixel according to the amounts of gradient and the weights.
  • The interpolation processing method refers to the energy gradient of the image in different directions: the pixel value of the current pixel is calculated by linear interpolation from the associated pixel unit, which has the same color as the current pixel and is adjacent to it, according to the gradient weights in the different directions.
  • In a direction in which the amount of energy change is small, the reference specific gravity is large, and therefore the weight in the interpolation calculation is large; conversely, in a direction in which the amount of energy change is large, the weight is small.
  • R5'5' is interpolated from R44, R74, R47 and R77, and there are no original pixels of the same color in the horizontal and vertical directions, so the components of the color in the horizontal and vertical directions are first calculated from the associated pixel unit.
  • the components in the horizontal direction are R45 and R75
  • the components in the vertical direction are R54 and R57 which can be calculated by R44, R74, R47 and R77, respectively.
  • R45 = R44*2/3 + R47*1/3;
  • R75 = R74*2/3 + R77*1/3;
  • R54 = R44*2/3 + R74*1/3;
  • R57 = R47*2/3 + R77*1/3.
  • The amounts of gradient and the weights in the horizontal and vertical directions are calculated respectively; that is, the gradient amounts of the color in the different directions are calculated to determine the reference weights in the different directions at the time of interpolation: the weight is larger in the direction with the smaller gradient, and smaller in the direction with the larger gradient.
  • the gradient amount in the horizontal direction is X1 = |R45 - R75|;
  • the gradient amount in the vertical direction is X2 = |R54 - R57|;
  • W1 = X1/(X1+X2);
  • W2 = X2/(X1+X2).
  • R5'5' = (2/3*R45 + 1/3*R75)*W2 + (2/3*R54 + 1/3*R57)*W1. It can be understood that if X1 is greater than X2, then W1 is greater than W2, so the weight in the horizontal direction in the calculation is W2, and the weight in the vertical direction is W1, and vice versa.
  • the pixel value of the current pixel can be calculated according to the interpolation method.
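  • The worked example above can be collected into one function. This follows the formulas in the text directly; only the flat-region fallback when both gradient amounts are zero is an added assumption:

```python
def interpolate_r55(r44, r47, r74, r77):
    """Estimate the red value at position (5,5) from the four nearest
    red original pixels, per the worked example. Column/row 5 sits one
    third of the way from 4 to 7, hence the 2/3, 1/3 weights."""
    # Components in the horizontal direction (rows 4 and 7, at column 5).
    r45 = r44 * 2/3 + r47 * 1/3
    r75 = r74 * 2/3 + r77 * 1/3
    # Components in the vertical direction (columns 4 and 7, at row 5).
    r54 = r44 * 2/3 + r74 * 1/3
    r57 = r47 * 2/3 + r77 * 1/3
    # Gradient amounts: a large change in a direction means that
    # direction gets the smaller reference weight.
    x1 = abs(r45 - r75)
    x2 = abs(r54 - r57)
    if x1 + x2 == 0:            # flat region: either direction works
        return r45
    w1 = x1 / (x1 + x2)
    w2 = x2 / (x1 + x2)
    return (r45 * 2/3 + r75 * 1/3) * w2 + (r54 * 2/3 + r57 * 1/3) * w1
```

For a uniform red region the result equals the common value; for a pure row-direction edge (rows 4 and 7 differ), all the weight shifts to the column-direction components.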
  • the original pixels can be converted into low-light pixels arranged in a typical Bayer array, that is, the low-light pixels of the adjacent four 2*2 arrays include one red low-light pixel, two green low-light pixels, and one Blue low-light pixels.
  • The manner of interpolation includes, but is not limited to, the manner disclosed in this embodiment of considering only the pixel values of the same color in the vertical and horizontal directions in the calculation; for example, the pixel values of other colors may also be referred to.
  • In some embodiments, before step S46 the control method includes a step of performing white balance compensation on the patch image (step S45a), and after step S46 a step of performing white balance compensation restoration on the low-light image (step S47a).
  • the second image processing module 140 includes a white balance compensation unit 145a and a white balance compensation reduction unit 147a.
  • Step S45a may be implemented by the white balance compensation unit 145a
  • step S47a may be implemented by the white balance compensation restoration unit 147a.
  • the white balance compensation unit 145a is configured to perform white balance compensation on the patch image
  • the white balance compensation restoration unit 147a is configured to perform white balance compensation restoration on the low-light image.
  • during interpolation, the red and blue low-light pixels often refer not only to the colors of the original pixels of the channel of the same color, but also to the color weights of the original pixels of the green channel. Therefore, white balance compensation is required before interpolation to exclude the influence of white balance from the interpolation calculation. In order not to destroy the white balance of the patch image, white balance compensation restoration needs to be performed on the low-light image after the interpolation, restoring according to the gain values of red, green and blue used in the compensation.
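The compensation-then-restoration round trip described above can be illustrated as follows. The channel gain values below are invented examples (real gains come from the camera's white balance, not from the patent), and the (channel, value) representation is an assumption for the sketch:

```python
# Example gains; hypothetical values, not from the patent.
GAINS = {"R": 1.8, "G": 1.0, "B": 1.5}

def wb_compensate(image, gains=GAINS):
    """Scale each pixel by its channel gain before interpolation, so
    that interpolation which mixes channels is not biased by white
    balance. `image` is a list of (channel, value) pairs."""
    return [(ch, val * gains[ch]) for ch, val in image]

def wb_restore(image, gains=GAINS):
    """Divide the same gains back out after interpolation, restoring
    the original white balance of the patch image."""
    return [(ch, val / gains[ch]) for ch, val in image]
```

Because restoration uses the same gain values as the compensation, the round trip leaves the image's white balance unchanged, which is the property the passage above requires.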
  • before step S46, the method includes the step S45b: performing dead pixel compensation on the patch image.
  • the second image processing module 140 includes a dead pixel compensation module 145b.
  • Step S45b can be implemented by the dead pixel compensation module 145b.
  • the dead pixel compensation module 145b is used to perform dead pixel compensation on the patch image.
  • limited by the manufacturing process, the image sensor 200 may have dead pixels.
  • a dead pixel usually presents the same color at all times regardless of sensitivity changes, and its presence will affect image quality. Therefore, to ensure that the interpolation is accurate and unaffected by dead pixels, dead pixel compensation is required before interpolation.
  • during dead pixel compensation, the original pixels may be detected; when an original pixel is detected to be a dead pixel, compensation may be performed according to the pixel values of the other original pixels of the image pixel unit in which it is located.
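A minimal sketch of this idea within one 2*2 image pixel unit. The detection rule and the threshold are assumptions (the patent does not specify how a dead pixel is detected, only that it is replaced using the other original pixels of its unit):

```python
def compensate_dead_pixel(unit, threshold=0.6):
    """unit: the four same-color original pixel values of one image
    pixel unit. A pixel whose relative deviation from the mean of the
    other three exceeds `threshold` is treated as a dead pixel and
    replaced by that mean. Detection always uses the original values,
    so one dead pixel cannot cascade into false replacements."""
    out = list(unit)
    for i, v in enumerate(unit):
        others = [u for j, u in enumerate(unit) if j != i]
        mean_others = sum(others) / len(others)
        if mean_others > 0 and abs(v - mean_others) / mean_others > threshold:
            out[i] = mean_others
    return out
```

For example, a stuck-at-zero pixel among three healthy neighbors near 100 is replaced by their mean, while a uniform unit passes through unchanged.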
  • before step S46, the method includes the step S45c: performing crosstalk compensation on the patch image.
  • the second image processing module 140 includes a crosstalk compensation module 145c.
  • Step S45c can be implemented by the crosstalk compensation module 145c.
  • the crosstalk compensation module 145c is configured to perform crosstalk compensation on the patch image.
  • the four photosensitive pixels in one photosensitive pixel unit are covered by a filter of the same color, and there may be differences in sensitivity between the photosensitive pixels, so that fixed-pattern noise appears in the solid-color regions of the true-color image converted from the low-light image, affecting the quality of the image. Therefore, crosstalk compensation needs to be performed on the patch image.
  • setting the compensation parameter includes the following steps:
  • the predetermined light environment may include, for example, an LED homogenizing plate, a color temperature of about 5000 K, and a brightness of about 1000 lux.
  • the imaging parameters may include a gain value, a shutter value, and a lens position. After the relevant parameters are set, the crosstalk compensation parameters are acquired.
  • during processing, a plurality of patch images are first acquired with the set imaging parameters in the set light environment and merged into one patch image, thereby reducing the noise influence of using a single patch image as the basis for calibration.
  • the crosstalk compensation is aimed at substantially calibrating the photosensitive pixels with different sensitivity to the same level by compensation.
  • taking an image pixel unit Gr including Gr1, Gr2, Gr3 and Gr4 as an example, the average pixel value of the image pixel unit is Gr_avg = (Gr1+Gr2+Gr3+Gr4)/4, which substantially characterizes the average sensitivity of the four photosensitive pixels. Taking this average as the base value, Gr1/Gr_avg, Gr2/Gr_avg, Gr3/Gr_avg and Gr4/Gr_avg are calculated respectively. It can be understood that calculating the ratio of the pixel value of each original pixel to the average pixel value of the image pixel unit substantially reflects the deviation of each original pixel from the base value. The four ratios are recorded as compensation parameters in the memory of the relevant device, and are retrieved during imaging to compensate each original pixel, thereby reducing crosstalk and improving image quality.
  • during verification, a patch image is first acquired with the same light environment and imaging parameters, and the patch image is crosstalk-compensated according to the calculated compensation parameters; the compensated Gr'_avg, Gr'1/Gr'_avg, Gr'2/Gr'_avg, Gr'3/Gr'_avg and Gr'4/Gr'_avg are then calculated. Whether the compensation parameters are accurate is judged from the calculation results, and the judgment can consider both macroscopic and microscopic perspectives.
  • microscopic means that a certain original pixel still deviates significantly after compensation and is easily perceived by the user after imaging; macroscopic judges from a global angle, that is, when the total number of original pixels that still deviate after compensation is large, the deviations are perceived by the user as a whole even if each individual deviation is small. Therefore, it is sufficient to set a ratio threshold for the microscopic case, while a ratio threshold and a count threshold need to be set for the macroscopic case. In this way, the set crosstalk compensation parameters can be verified to ensure that the compensation parameters are correct, reducing the impact of crosstalk on image quality.
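The calibration and verification described above can be illustrated with a short sketch. Only the Gr_i/Gr_avg ratios are specified by the description; the residual-deviation measure used for verification is an assumption here:

```python
def crosstalk_params(unit):
    """Compensation ratios Gr_i/Gr_avg for one same-color 2*2 image
    pixel unit, e.g. (Gr1, Gr2, Gr3, Gr4); these are what would be
    recorded in device memory during calibration."""
    avg = sum(unit) / len(unit)          # Gr_avg, the base value
    return [v / avg for v in unit]

def apply_crosstalk_compensation(unit, params):
    # Dividing each pixel by its recorded ratio pulls the four
    # photosensitive pixels back toward the unit's base level.
    return [v / p for v, p in zip(unit, params)]

def residual_deviation(unit):
    """Maximum relative deviation from the unit average after
    compensation; during verification this would be compared against
    the ratio threshold (the measure itself is an assumption)."""
    avg = sum(unit) / len(unit)
    return max(abs(v - avg) / avg for v in unit)
```

Applying the recorded ratios to the same unit that produced them flattens it to its average, which is the "calibrate to the same level" goal stated above; verification then checks how flat a freshly captured, compensated unit actually is.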
  • after step S46, the method further includes the following step:
  • S47b: performing lens shading correction, demosaicing, noise reduction and edge sharpening on the low-light image.
  • the second image processing module 140 includes a processing unit 147b.
  • Step S47b may be implemented by the processing unit 147b, or the processing unit 147b may be configured to perform lens shading correction, demosaicing, noise reduction, and edge sharpening processing on the low-light image.
  • after the patch image is converted into the low-light image, the low-light pixels are arranged in a typical Bayer array and can be processed by the processing unit 147b; the processing includes lens shading correction, demosaicing, noise reduction and edge sharpening. The processed image can then be synthesized with the highlight image to obtain an HDR image.
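A toy sketch of this final synthesis step. The description says the low-brightness parts of the result come from the merged (binned) image for better SNR and the high-brightness parts from the patch-derived image for better resolution, but gives no concrete blending rule; the per-pixel luminance threshold below is purely an assumption:

```python
def merge_hdr(high_img, low_img, threshold=128):
    """high_img: pixels from the scaled-up merged (binned) output;
    low_img: pixels interpolated from the patch image. Both are
    equal-length sequences of luminance values for the same scene."""
    out = []
    for h, l in zip(high_img, low_img):
        # Dark regions take the merged image (better SNR); bright
        # regions take the patch-derived image (better resolution).
        out.append(h if l < threshold else l)
    return out
```

A real pipeline would blend smoothly around the threshold rather than switching hard, but the hard switch shows which image supplies which tonal range.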
  • an electronic device 1000 includes a housing 300, a processor 400, a memory 500, a circuit board 600, a power supply circuit 700, and an imaging device.
  • the imaging apparatus includes an image sensor 200, and the image sensor 200 includes an array of photosensitive pixel units and an array of filter units disposed on the array of photosensitive pixel units; each filter unit covers a corresponding photosensitive pixel unit, and each photosensitive pixel unit includes a plurality of photosensitive pixels.
  • the circuit board 600 is disposed inside the space enclosed by the housing 300, and the processor 400 and the memory 500 are disposed on the circuit board; the power supply circuit 700 is used to supply power to the respective circuits or devices of the electronic device 1000; and the memory 500 is used to store executable program code;
  • the processor 400 runs a program corresponding to the executable program code by reading the executable program code stored in the memory 500 to implement the control method of any of the embodiments of the present invention described above. In this process, the processor 400 is configured to perform the following steps:
  • the merged image comprises a merged pixel arranged in a predetermined array, and the plurality of photosensitive pixels of the same photosensitive pixel unit are combined and output as one merged pixel;
  • the highlight image comprising a highlighted pixel arranged in a predetermined array
  • converting the patch image into a low-light image by interpolation calculation, the low-light image comprising low-light pixels arranged in a predetermined array;
  • the patch image includes image pixel units arranged in a predetermined array, the image pixel unit includes a plurality of original pixels of the same color, the low-light image includes a predetermined array of low-light pixels, and the low-light pixels include current pixels.
  • the original pixel includes an associated pixel corresponding to the current pixel, and the processor 400 is configured to perform the following steps:
  • the pixel value of the associated pixel is taken as the pixel value of the current pixel
  • the pixel value of the current pixel is calculated by interpolation according to the pixel values of the associated pixel unit; the array of image pixel units includes the associated pixel unit, and the associated pixel unit has the same color as the current pixel and is adjacent to the current pixel.
  • in some embodiments, the processor 400 is configured to perform the following steps: calculating the gradients of the associated pixels in each direction; calculating the weights of the associated pixels in each direction; and calculating the pixel value of the current pixel according to the gradients and the weights.
  • in some embodiments, the processor 400 is configured to perform the following steps: performing white balance compensation on the patch image; and performing white balance compensation restoration on the low-light image.
  • in some embodiments, the processor 400 is configured to perform the following step: performing dead pixel compensation on the patch image.
  • in some embodiments, the processor 400 is configured to perform the following step: performing crosstalk compensation on the patch image.
  • in some embodiments, the processor 400 is configured to perform the following step: performing lens shading correction, demosaicing, noise reduction and edge sharpening on the low-light image.
  • the foregoing explanations of the control method and the control device 100 are also applicable to the electronic device 1000 of the embodiment of the present invention, and details are not described herein again.
  • the computer readable storage medium of the embodiment of the present invention has instructions stored therein; when the processor 400 of the electronic device 1000 executes the instructions, the electronic device 1000 performs the control method of the embodiment of the present invention, and the foregoing explanations of the control method and the control device 100 are also applicable to the computer readable storage medium of the embodiment of the present invention, and are not described herein again.
  • the electronic device 1000 and the computer readable storage medium utilize the characteristic that the different image output modes of the image sensor differ in brightness, and use the images of the two modes to synthesize a wide dynamic range image, thereby reducing synthesis processing time and improving efficiency.
  • first and second are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated.
  • features defining “first” or “second” may include at least one of the features, either explicitly or implicitly.
  • the meaning of "a plurality” is at least two, such as two, three, etc., unless specifically defined otherwise.
  • a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • computer readable media include the following: electrical connections (electronic devices) having one or more wires, portable computer disk cartridges (magnetic devices), random access memory (RAM), Read only memory (ROM), erasable editable read only memory (EPROM or flash memory), fiber optic devices, and portable compact disk read only memory (CDROM).
  • the computer readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or, if necessary, processing it in another suitable manner, and then stored in a computer memory.
  • portions of the invention may be implemented in hardware, software, firmware or a combination thereof.
  • multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system.
  • For example, if implemented in hardware, as in another embodiment, it can be implemented by any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGAs), field programmable gate arrays (FPGAs), and the like.
  • each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
  • the above mentioned storage medium may be a read only memory, a magnetic disk or an optical disk or the like.


Abstract

The present invention discloses a control method for controlling an electronic device. The control method includes the following steps: controlling a photosensitive pixel unit array to expose and output a merged image; controlling the photosensitive pixel unit array to expose and output a patch image; converting the merged image into a high-light image by scaling calculation; converting the patch image into a low-light image by interpolation calculation; and merging the high-light image and the low-light image to obtain a wide dynamic range image. The present invention further discloses a control device and an electronic device. The control method, control device and electronic device of embodiments of the present invention utilize the characteristic that different image output modes of an image sensor differ in brightness, and use the images of the two modes to synthesize a wide dynamic range image, reducing synthesis processing time and improving efficiency.

Description

控制方法、控制装置和电子装置
相关申请的交叉引用
本申请基于申请号为201611079317.4,申请日为2016年11月29日的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本申请作为参考。
技术领域
本发明涉及图像处理技术,特别涉及一种控制方法、控制装置和电子装置。
背景技术
现有的一种图像传感器包括感光像素单元阵列和设置在像素单元阵列上的滤光片单元阵列,每个滤光片单元阵列覆盖对应一个感光像素单元,每个像素单元包括多个感光像素。工作时,可以控制图像传感器曝光输出合并图像,合并图像包括合并像素阵列,同一感光像素单元的多个感光像素合并输出作为一个合并像素。如此,可以提高合并图像的信噪比,然而,合并图像的解析度降低。当然,也可以控制图像传感器曝光输出色块图像,色块图像包括图像像素单元阵列,图像像素单元包括原始像素,每个感光像素对应一个原始像素。然而,由于同一滤光片单元对应的多个原始像素颜色相同,同样无法提高色块图像的解析度。因此,需要通过插值计算的方式将色块图像转化成仿原图像,仿原图像可以包括呈拜耳阵列排布的仿原像素。然而,在应用宽动态范围(High Dynamic Range,HDR)功能时,需要多帧亮度不同的仿原图像,也即是需要进行多次插值计算,耗费资源且耗时。
发明内容
本发明的实施例提供一种控制方法、控制装置和电子装置。
一种控制方法,用于控制电子装置,所述电子装置包括成像装置和显示器,所述成像装置包括图像传感器,所述图像传感器包括感光像素单元阵列和设置在所述感光像素单元阵列上的滤光片单元阵列,每个所述滤光片单元覆盖对应一个所述感光像素单元,每个所述感光像素单元包括多个感光像素;所述控制方法包括以下步骤:
控制所述感光像素单元阵列曝光并输出合并图像,所述合并图像包括预定阵列排布的合并像素,同一所述感光像素单元的多个所述感光像素合并输出作为一个合并像素;
控制所述感光像素单元阵列曝光并输出色块图像,所述色块图像包括预定阵列排布的原始像素,每个所述感光像素对应一个所述原始像素;
通过缩放计算方式将所述合并图像转换成高亮图像,所述高亮图像包括预定阵列排布的高亮像素;
通过插值计算方式将所述色块图像转换成低亮图像,所述低亮图像包括预定阵列排布的低亮像素;和
合并所述高亮图像和所述低亮图像以得到宽动态范围图像。
一种控制装置,用于控制电子装置,所述电子装置包括成像装置和显示器,所述成像装置包括图像传感器,所述图像传感器包括感光像素单元阵列和设置在所述感光像素单元阵列上的滤光片单元阵列,每个所述滤光片单元覆盖对应一个所述感光像素单元,每个所述感光像素单元包括多个感光像素;所述控制装置包括:
第一控制模块,用于控制所述感光像素单元阵列曝光并输出合并图像,所述合并图像包括预定阵列排布的合并像素,同一所述感光像素单元的多个所述感光像素合并输出作为一个合并像素;
第二控制模块,用于控制所述感光像素单元阵列曝光并输出色块图像,所述色块图像包括预定阵列排布的原始像素,每个所述感光像素对应一个所述原始像素;
第一图像处理模块,用于通过缩放计算方式将所述合并图像转换成高亮图像,所述高亮图像包括预定阵列排布的高亮像素;
第二图像处理模块,用于通过插值计算方式将所述色块图像转换成低亮图像,所述低亮图像包括预定阵列排布的低亮像素;和
合并模块,用于合并所述高亮图像和所述低亮图像以得到宽动态范围图像。
一种电子装置,包括成像装置、显示器和上述控制装置。
一种电子装置,包括壳体、处理器、存储器、电路板、电源电路和成像装置,所述电路板安置在所述壳体围成的空间内部,所述处理器和所述存储器设置在所述电路板上;所述电源电路,用于为所述电子装置的各个电路或器件供电;所述存储器用于存储可执行程序代码;所述处理器通过读取所述存储器中存储的可执行程序代码来运行与所述可执行程序代码对应的程序,以用于执行所述的控制方法。
本发明实施方式的控制方法、控制装置和电子装置,利用图像传感器不同图像输出模式存在亮度差的特性,采用两种模式下的图像进行宽动态范围图像的合成,减少合成处理的时间,提高效率。
本发明的附加方面和优点将在下面的描述中部分给出,部分将从下面的描述中变得明显,或通过本发明的实践了解到。
附图说明
本发明的上述和/或附加的方面和优点从结合下面附图对实施方式的描述中将变得明 显和容易理解,其中:
图1是本发明实施方式的控制方法的流程示意图。
图2是本发明实施方式的电子装置的模块示意图。
图3是本发明实施方式的图像传感器的模块示意图。
图4是本发明实施方式的图像传感器的电路示意图。
图5是本发明实施方式的滤光片单元的示意图
图6是本发明实施方式的图像传感器的结构示意图。
图7是本发明实施方式的合并图像状态示意图。
图8是本发明实施方式的色块图像的状态示意图。
图9是本发明某些实施方式的控制方法的流程示意图。
图10是本发明某些实施方式的第二图像处理模块的模块示意图。
图11是本发明某些实施方式的控制方法的状态示意图。
图12是本发明某些实施方式的控制方法的状态示意图。
图13是本发明某些实施方式的第二计算单元的模块示意图。
图14是本发明某些实施方式的控制方法的流程示意图。
图15是本发明某些实施方式的控制方法的流程示意图。
图16是本发明某些实施方式的控制方法的状态示意图。
图17是本发明某些实施方式的电子装置的模块示意图。
具体实施方式
下面详细描述本发明的实施例,所述实施例的示例在附图中示出,其中自始至终相同或类似的标号表示相同或类似的元件或具有相同或类似功能的元件。下面通过参考附图描述的实施例是示例性的,旨在用于解释本发明,而不能理解为对本发明的限制。
请参阅图1,本发明实施方式的控制方法,用于控制电子装置。电子装置包括成像装置和显示器。成像装置包括图像传感器,图像传感器包括感光像素单元阵列和设置在感光像素单元阵列上的滤光片单元阵列,每个滤光片单元覆盖对应一个感光像素单元,每个感光像素单元包括多个感光像素。控制方法包括步骤:
S10:控制感光像素单元阵列曝光并输出合并图像;
S20:控制感光像素单元阵列曝光并输出色块图像;
S30:通过缩放计算方式将合并图像转换成高亮图像;
S40:通过插值计算方式将色块图像转换成低亮图像;和
S50:合并高亮图像和低亮图像以得到宽动态范围图像。
请参阅图2,本发明实施方式的控制装置100,用于控制电子装置1000,电子装置1000还包括成像装置和显示器。成像装置包括图像传感器200。控制装置100包括第一控制模块110、第二控制模块120、第一图像处理模块130、第二图像处理模块140和合并模块150。作为例子,本发明实施方式的控制方法可以由本发明实施方式的控制装置100实现,可应用于电子装置1000并用于控制电子装置1000的成像装置的图像传感器200输出宽动态范围图像。
在一些示例中,电子装置1000包括手机或平板电脑。成像装置包括前置相机或后置相机。
请一并参阅图3至图6,本发明实施方式的图像传感器200包括感光像素单元阵列210和设置在感光像素单元阵列210上的滤光片单元阵列220。
进一步地,感光像素单元阵列210包括多个感光像素单元210a,每个感光像素单元210a包括多个相邻的感光像素212。每个感光像素212包括一个感光器件2121和一个传输管2122,其中,感光器件2121可以是光电二极管,传输管2122可以是MOS晶体管。
滤光片单元阵列220包括多个滤光片单元220a,每个滤光片单元220a覆盖对应一个感光像素单元210a。
具体地,在某些示例中,滤光片单元阵列220包括拜耳阵列,也即是说,相邻的四个滤光片单元220a分别为一个红色滤光片单元、一个蓝色滤光片单元和两个绿色滤光片单元。
每一个感光像素单元210a对应同一颜色的滤光片单元220a,若一个感光像素单元210a中一共包括n个相邻的感光器件2121,那么一个滤光片单元220a覆盖一个感光像素单元210a中的n个感光器件2121,该滤光片单元220a可以是一体构造,也可以由n个独立的子滤光片组装连接在一起。
在某些实施方式中,每个感光像素单元210a包括四个相邻的感光像素212,相邻两个感光像素212共同构成一个感光像素子单元2120,感光像素子单元2120还包括一个源极跟随器2123及一个模数转换器2124。感光像素单元210a还包括一个加法器213。其中,一个感光像素子单元2120中的每个传输管2122的一端电极被连接到对应感光器件2121的阴极电极,每个传输管2122的另一端被共同连接至源极跟随器2123的闸极电极,并通过源极跟随器2123的源极电极连接至一个模数转换器2124。其中,源极跟随器2123可以是MOS晶体管。两个感光像素子单元2120通过各自的源极跟随器2123及模数转换器2124连接至加法器213。
也即是说,本发明实施方式的图像传感器200的一个感光像素单元210a中相邻的四个感光器件2121共用一个同颜色的滤光片单元220a,每个感光器件2121对应连接一个传输管2122,相邻两个感光器件2121共用一个源极跟随器2123和一个模数转换器2124,相邻 的四个感光器件2121共用一个加法器213。
进一步地,相邻的四个感光器件2121呈2*2阵列排布。其中,一个感光像素子单元2120中的两个感光器件2121可以处于同一列。
在成像时,当同一滤光片单元220a下覆盖的两个感光像素子单元2120或者说四个感光器件2121同时曝光时,可以对像素进行合并进而可输出合并图像。
具体地,感光器件2121用于将光照转化为电荷,且产生的电荷与光照强度成比例关系,传输管2122用于根据控制信号来控制电路的导通或断开。当电路导通时,源极跟随器2123用于将感光器件2121经光照产生的电荷信号转化为电压信号。模数转换器2124用于将电压信号转换为数字信号。加法器213用于将两路数字信号相加共同输出,以供与图像传感器200相连的图像处理模块处理。
请参阅图7,以16M的图像传感器200举例来说,本发明实施方式的图像传感器200可以将16M的感光像素合并成4M,或者说,输出合并图像,合并图像包括预定阵列排布的合并像素,同一感光像素单元210a的多个感光像素212合并输出作为一个合并像素,在一些示例中,每个感光像素单元210a包括四个感光像素212,也即是说,合并后,感光像素的大小相当于变成了原来大小的4倍,从而提升了感光像素的感光度。此外,由于图像传感器200中的噪声大部分都是随机噪声,对于合并之前的感光像素的来说,有可能其中一个或两个像素中存在噪点,而在将四个感光像素合并成一个大的感光像素后,减小了噪点对该大像素的影响,也即是减弱了噪声,提高了信噪比。
但在感光像素大小变大的同时,由于像素值降低,合并图像的解析度也将降低。
在成像时,当同一滤光片单元220a下覆盖的四个感光器件2121依次曝光时,经过图像处理可以输出色块图像。
具体地,感光器件2121用于将光照转化为电荷,且产生的电荷与光照强度成比例关系,传输管2122用于根据控制信号来控制电路的导通或断开。当电路导通时,源极跟随器2123用于将感光器件2121经光照产生的电荷信号转化为电压信号。模数转换器2124用于将电压信号转换为数字信号,以供与图像传感器200相连的图像处理模块处理。
请参阅图8,以16M的图像传感器200举例来说,本发明实施方式的图像传感器200还可以保持16M的感光像素输出,或者说输出色块图像,色块图像包括图像像素单元,图像像素单元包括2*2阵列排布的原始像素,该原始像素的大小与感光像素大小相同,然而由于覆盖相邻四个感光器件2121的滤光片单元220a为同一颜色,也即是说,虽然四个感光器件2121分别曝光,但覆盖其的滤光片单元220a颜色相同,因此,输出的每个图像像素单元的相邻四个原始像素颜色相同,仍然无法提高图像的解析度,需要进行进一步的处理。
可以理解,合并图像在输出时,四个相邻的同色感光像素以合并像素输出,如此,合并图像中的四个相邻的合并像素仍可看作是典型的拜耳阵列,可以直接被图像处理模块接收进行处理以输出真彩图像。而色块图像在输出时每个感光像素分别输出,由于相邻四个感光像素颜色相同,因此,一个图像像素单元的四个相邻原始像素的颜色相同,是非典型的拜耳阵列。而图像处理模块无法对非典型拜耳阵列直接进行处理,也即是说,在图像传感器200采用同一图像处理模块时,为兼容两种模式的真彩图像输出即合并模式下的真彩图像输出及色块模式下的真彩图像输出,需对色块图像进行转化处理,或者说将非典型拜耳阵列的图像像素单元转化为典型拜耳阵列的像素排布。
进一步地,合并图像输出时,四个相邻的同色感光像素以合并像素输出,在同一曝光条件下,相对于感光像素分别输出形成的色块图像,合并图像的感光度或者说亮度是色块图像的四倍,而这种亮度差异构成了应用宽动态范围(High Dynamic Range,HDR)模式的条件,即对同一被摄物采用不同曝光参数输出多帧图像,并进行合并。如此,利用图像传感器200两种输出模式在同一曝光条件下可输出不同亮度的图像的特性,可直接进行HDR模式图像的处理。
然而,需要注意的是,由于合并图像相当于以4M输出,因此,在这种模式下输出的合并图像的尺寸与以16M输出的色块图像的尺寸并不相同,在合并前需要对合并图像的进行尺寸处理。
例如,可以通过缩放计算方式,将合并图像进行放大,使其与色块图像的尺寸一致。缩放计算后,可将合并图像转换为高亮图像,其中,高亮图像包括呈拜耳阵列排布的高亮像素。
如上述,色块图像中的每个图像像素单元以非典型拜耳阵列排布,因此无法直接使用图像处理模块进行处理,以使其与高亮图像进行合并,也即是说,需要对色块图像进行处理,例如可以通过插值计算方式将色块图像转换为低亮图像,低亮图像包括低亮像素,低亮像素以预定阵列也即是拜耳阵列排布。如此,可对高亮图像与低亮图像进行合并,进而转而为真彩图像通过显示器输出给用户。合成中,低亮度部分采用合并图像的相应部分,从而提高低亮区域的信噪比,而高亮度部分采用色块图像的相应部分,从而提高高亮区域的解析力。
如此,本发明实施方式的控制方法、控制装置100及电子装置1000,由于采用的图像传感器200可以两种不同模式输出图像,所输出图像存在亮度差异,形成HDR模式的条件,相较于在16M模式下直接进行HDR,也即是,以不同曝光值输出多帧色块图像,并对每帧色块图像进行插值处理后再进行合并,节省了时间,提高效率。
请参阅图9及图10,在某些实施方式中,低亮图像包括呈拜耳阵列排布的低亮像素。 低亮像素包括当前像素,原始像素包括与当前像素对应的关联像素。
步骤S40包括步骤:
S42:判断当前像素的颜色与关联像素的颜色是否相同;
S44:在当前像素的颜色与关联像素的颜色相同时,将关联像素的像素值作为当前像素的像素值;和
S46:在当前像素的颜色与关联像素的颜色不同时,根据关联像素单元的像素值通过插值方式计算当前像素的像素值。
在某些实施方式中,第二图像处理模块140包括判断单元142、第一计算单元144和第二计算单元146。步骤S42可以由判断单元142实现,步骤S44可以由第一计算单元144实现,步骤S46可以由第二计算单元146实现。或者说,判断模块142用于判断当前像素的颜色与关联像素的颜色是否相同。第一计算单元144用于在当前像素的颜色与关联像素的颜色相同时,将关联像素的像素值作为当前像素的像素值。第二计算单元146用于在当前像素的颜色与关联像素的颜色不同时,根据关联像素单元的像素值通过插值方式计算当前像素的像素值。
请参阅图11,以图11为例,当前像素为R3’3’和R5’5’,对应的关联像素分别为R33和B55。
在获取当前像素R3’3’时,由于R3’3’与对应的关联像素R33的颜色相同,因此在转化时直接将R33的像素值作为R3’3’的像素值。
在获取当前像素R5’5’时,由于R5’5’与对应的关联像素B55的颜色不相同,显然不能直接将B55的像素值作为R5’5’的像素值,需要根据R5’5’的关联像素单元通过插值的方式计算得到。
需要说明的是,以上及下文中的像素值应当广义理解为该像素的颜色属性数值,例如色彩值。
关联像素单元包括多个,例如4个,颜色与当前像素相同且与当前像素相邻的图像像素单元中的原始像素。
需要说明的是,此处相邻应做广义理解,以图11为例,R5’5’对应的关联像素为B55,与B55所在的图像像素单元相邻的且与R5’5’颜色相同的关联像素单元所在的图像像素单元分别为R44、R74、R47、R77所在的图像像素单元,而并非在空间上距离B55所在的图像像素单元更远的其他的红色图像像素单元。其中,与B55在空间上距离最近的红色原始像素分别为R44、R74、R47和R77,也即是说,R5’5’的关联像素单元由R44、R74、R47和R77组成,R5’5’与R44、R74、R47和R77的颜色相同且相邻。
如此,针对不同情况的当前像素,采用不同方式的将原始像素转化为低亮像素,从而 将色块图像转化为低亮图像,由于图像传感器200采用了特殊的拜耳阵列结构的滤光片,提高了图像信噪比,并且在图像处理过程中,通过插值方式对色块图像进行插值处理,提高了图像的分辨率及解析度。
请参阅图12,在某些实施方式中,步骤S46包括以下步骤:
S461:计算关联像素单元各个方向上的渐变量;
S462:计算关联像素单元各个方向上的权重;和
S463:根据渐变量及权重计算当前像素的像素值。
请参阅图13,在某些实施方式中,第二计算单元146包括第一计算子单元1461、第二计算子单元1462和第三计算子单元1463。步骤S461可以由第一计算子单元1461实现,步骤S462可以由第二计算子单元1462实现,步骤S463可以由第三计算子单元1463实现。或者说,第一计算子单元1461用于计算关联像素单元各个方向上的渐变量,第二计算子单元1462用于计算关联像素单元各个方向上的权重,第三计算子单元1463用于根据渐变量及权重计算当前像素的像素值。
具体地,插值处理方式是参考图像在不同方向上的能量渐变,将与当前像素对应的颜色相同且相邻的关联像素单元依据在不同方向上的渐变权重大小,通过线性插值的方式计算得到当前像素的像素值。其中,在能量变化量较小的方向上,参考比重较大,因此,在插值计算时的权重较大。
在某些示例中,为方便计算,仅考虑水平和垂直方向。
R5’5’由R44、R74、R47和R77插值得到,而在水平和垂直方向上并不存在颜色相同的原始像素,因此需首先根据关联像素单元计算在水平和垂直方向上该颜色的分量。其中,水平方向上的分量为R45和R75,垂直方向的分量为R54和R57,可以分别通过R44、R74、R47和R77计算得到。
具体地,R45=R44*2/3+R47*1/3,R75=2/3*R74+1/3*R77,R54=2/3*R44+1/3*R74,R57=2/3*R47+1/3*R77。
然后,分别计算在水平和垂直方向的渐变量及权重,也即是说,根据该颜色在不同方向的渐变量,以确定在插值时不同方向的参考权重,在渐变量小的方向,权重较大,而在渐变量较大的方向,权重较小。其中,在水平方向的渐变量X1=|R45-R75|,在垂直方向上的渐变量X2=|R54-R57|,W1=X1/(X1+X2),W2=X2/(X1+X2)。
如此,根据上述可计算得到,R5’5’=(2/3*R45+1/3*R75)*W2+(2/3*R54+1/3*R57)*W1。可以理解,若X1大于X2,则W1大于W2,因此计算时水平方向的权重为W2,而垂直方向的权重为W1,反之亦反。
如此,可根据插值方式计算得到当前像素的像素值。依据上述对关联像素的处理方式, 可将原始像素转化为呈典型拜耳阵列排布的低亮像素,也即是说,相邻的四个2*2阵列的低亮像素包括一个红色低亮像素,两个绿色低亮像素和一个蓝色低亮像素。
需要说明的是,插值的方式包括但不限于本实施例中公开的在计算时仅考虑垂直和水平两个方向相同颜色的像素值的方式,例如还可以参考其他颜色的像素值。
请参阅图10和图14,在某些实施方式中,步骤S46前包括步骤:
S45a:对色块图像做白平衡补偿;
步骤S46后包括步骤:
S47a:对低亮图像做白平衡补偿还原。
在某些实施方式中,第二图像处理模块140包括白平衡补偿单元145a和白平衡补偿还原单元147a。步骤S45a可以由白平衡补偿单元145a实现,步骤S47a可以由白平衡补偿还原单元147a实现。或者说,白平衡补偿单元145a用于对色块图像做白平衡补偿,白平衡补偿还原单元147a用于对低亮图像做白平衡补偿还原。
具体地,在一些示例中,在将色块图像转化为低亮图像的过程中,在插值过程中,红色和蓝色低亮像素往往不仅参考与其颜色相同的通道的原始像素的颜色,还会参考绿色通道的原始像素的颜色权重,因此,在插值前需要进行白平衡补偿,以在插值计算中排除白平衡的影响。为了不破坏色块图像的白平衡,因此,在插值之后需要将低亮图像进行白平衡补偿还原,还原时根据在补偿中红色、绿色及蓝色的增益值进行还原。
如此,可排除在插值过程中白平衡的影响,并且能够使得插值后得到的低亮图像保持色块图像的白平衡。
请再次参阅图10和图14,在某些实施方式中,步骤S46前包括步骤:
S45b:对色块图像做坏点补偿。
在某些实施方式中,第二图像处理模块140包括坏点补偿模块145b。步骤S45b可以由坏点补偿模块145b实现。或者说,坏点补偿模块145b用于对色块图像做坏点补偿。
可以理解,受限于制造工艺,图像传感器200可能会存在坏点,坏点通常不随感光度变化而始终呈现同一颜色,坏点的存在将影响图像质量,因此,为保证插值的准确,不受坏点的影响,需要在插值前进行坏点补偿。
具体地,坏点补偿过程中,可以对原始像素进行检测,当检测到某一原始像素为坏点时,可根据其所在的图像像素单元的其他原始像的像素值进行坏点补偿。
如此,可排除坏点对插值处理的影响,提高图像质量。
请再次参阅图10和图14,在某些实施方式中,步骤S46前包括步骤:
S45c:对色块图像做串扰补偿。
在某些实施方式中,第二图像处理模块140包括串扰补偿模块145c。步骤S45c可以由 串扰补偿模块145c实现。或者说,串扰补偿模块145c用于对色块图像做串扰补偿。
具体的,一个感光像素单元中的四个感光像素覆盖同一颜色的滤光片,而感光像素之间可能存在感光度的差异,以至于以低亮图像转化输出的真彩图像中的纯色区域会出现固定型谱噪声,影响图像的质量。因此,需要对色块图像进行串扰补偿。
请参阅图15,如上述解释说明,进行串扰补偿,需要在图像传感器200制造过程中设定补偿参数,并将串扰补偿的相关参数预置于成像装置的存储器中或装设成像装置的电子装置1000例如手机或平板电脑中。
在某些实施方式中,设定补偿参数包括以下步骤:
S451:提供预定光环境;
S452:设置成像装置的成像参数;
S453:拍摄多帧图像;
S454:处理多帧图像以获得串扰补偿参数;和
S455:将串扰补偿参数保存在所述图像处理装置内。
预定光环境例如可包括LED匀光板,5000K左右的色温,亮度1000勒克斯左右,成像参数可包括增益值,快门值及镜头位置。设定好相关参数后,进行串扰补偿参数的获取。
处理过程中,首先在设定的光环境中以设置好的成像参数,获取多张色块图像,并合并成一张色块图像,如此可减少以单张色块图像作为校准基础的噪声影响。
请参阅图16,以图16中的图像像素单元Gr为例,其包括Gr1、Gr2、Gr3和Gr4,串扰补偿目的在于将感光度可能存在差异的感光像素通过补偿基本校准至同一水平。该图像像素单元的平均像素值为Gr_avg=(Gr1+Gr2+Gr3+Gr4)/4,可基本表征这四个感光像素的感光度的平均水平,以此平均值作为基础值,分别计算Gr1/Gr_avg,Gr2/Gr_avg,Gr3/Gr_avg和Gr4/Gr_avg,可以理解,通过计算每一个原始像素的像素值与该图像像素单元的平均像素值的比值,可以基本反映每个原始像素与基础值的偏差,记录四个比值并作为补偿参数记录到相关装置的存储器中,以在成像时进行调取对每个原始像素进行补偿,从而减少串扰,提高图像质量。
通常,在设定串扰补偿参数后还应当验证所设定的参数是否准确。
验证过程中,首先以相同的光环境和成像参数获取一张色块图像,依据计算得到的补偿参数对该色块图像进行串扰补偿,计算补偿后的Gr’_avg、Gr’1/Gr’_avg、Gr’2/Gr’_avg、Gr’3/Gr’_avg和Gr’4/Gr’_avg。根据计算结果判断补偿参数是否准确,判断可根据宏观与微观两个角度考虑。微观是指某一个原始像素在补偿后仍然偏差较大,成像后易被使用者感知,而宏观则从全局角度,也即是在补偿后仍存在偏差的原始像素的总数目较多时,此时即便单独的每一个原始像素的偏差不大,但作为整体仍然会被使用者感知。 因此,针对微观设置一个比例阈值即可,针对宏观需设置一个比例阈值和一个数量阈值。如此,可对设置的串扰补偿参数进行验证,确保补偿参数的正确,以减少串扰对图像质量的影响。
请参阅图10和图14,在某些实施方式中,步骤S46后还包括步骤:
S47b:对低亮图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
在某些实施方式中,第二图像处理模块140包括处理单元147b。步骤S47b可以由处理单元147b实现,或者说,处理单元147b用于对低亮图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
可以理解,将色块图像转化为低亮图像后,低亮像素排布为典型的拜耳阵列,可采用处理单元147b进行处理,处理过程中包括镜片阴影校正、去马赛克、降噪和边缘锐化处理,如此,处理后的图像可用于与高亮图像进行合成,从而得到HDR图像。
请参阅图17,本发明实施方式的电子装置1000包括壳体300、处理器400、存储器500、电路板600、电源电路700和成像装置。其中,成像装置包括图像传感器200,图像传感器200包括感光像素单元阵列和设置在感光像素单元阵列上的滤光片单元阵列,每个滤光片单元覆盖对应一个感光像素单元,每个感光像素单元包括多个感光像素,电路板600安置在壳体300围成的空间内部,处理器400和存储器500设置在电路板上;电源电路700用于为电子装置1000的各个电路或器件供电;存储器500用于存储可执行程序代码;处理器400通过读取存储器500中存储的可执行程序代码来运行与可执行程序代码对应的程序以实现上述的本发明任一实施方式的控制方法。在此过程中,处理器400用于执行以下步骤:
控制感光像素单元阵列曝光并输出合并图像,合并图像包括预定阵列排布的合并像素,同一感光像素单元的多个感光像素合并输出作为一个合并像素;
控制感光像素单元阵列曝光并输出色块图像,色块图像包括预定阵列排布的原始像素,每个感光像素对应一个原始像素;
通过缩放计算方式将合并图像转换成高亮图像,高亮图像包括预定阵列排布的高亮像素;
通过插值计算方式将色块图像转换成低亮图像,所述低亮图像包括预定阵列排布的低亮像素;和
合并高亮图像和低亮图像以得到宽动态范围图像。
在某些实施方式中,色块图像包括预定阵列排布的图像像素单元,图像像素单元包括多个颜色相同的原始像素,低亮图像包括预定阵列的低亮像素,低亮像素包括当前像素,原始像素包括与当前像素对应的关联像素,处理器400用于执行以下步骤:
判断当前像素的颜色与关联像素的颜色是否相同;
在当前像素的颜色与关联像素的颜色相同时,将关联像素的像素值作为当前像素的像素值;和
在当前像素的颜色与关联像素的颜色不同时,根据关联像素单元的像素值通过插值方式计算当前像素的像素值,图像像素单元的阵列包括关联像素单元,关联像素单元与当前像素颜色相同且与当前像素相邻。
在某些实施方式中,处理器400用于执行以下步骤:
计算关联像素各个方向上的渐变量;
计算关联像素各个方向上的权重;和
根据渐变量及权重计算当前像素的像素值。
在某些实施方式中,处理器400用于执行以下步骤:
对色块图像做白平衡补偿;
对低亮图像做白平衡补偿还原。
在某些实施方式中,处理器400用于执行以下步骤:
对色块图像做坏点补偿。
在某些实施方式中,处理器400用于执行以下步骤:
对色块图像做串扰补偿。
在某些实施方式中,处理器400用于执行以下步骤:
对低亮图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
需要说明的是,前述对控制方法和控制装置100的解释说明也适用于本发明实施方式的电子装置1000,此处不再赘述。
本发明实施方式的计算机可读存储介质,具有存储于其中的指令,当电子装置1000的处理器400执行指令时,电子装置1000执行本发明实施方式的控制方法,前述对控制方法和控制装置100的解释说明也适用于本发明实施方式的计算机可读存储介质,此处不再赘述。
综上所述,本发明实施方式的电子装置1000和计算机可读存储介质,利用图像传感器不同图像输出模式存在亮度差的特性,采用两种模式下的图像进行宽动态范围图像的合成,减少合成处理的时间,提高效率。
在本说明书的描述中,参考术语“一个实施例”、“一些实施例”、“示例”、“具体示例”、或“一些示例”等的描述意指结合该实施例或示例描述的具体特征、结构、材料或者特点包含于本发明的至少一个实施例或示例中。在本说明书中,对上述术语的示意性表述不必须针对的是相同的实施例或示例。而且,描述的具体特征、结构、材料或者特点可以在任一个或多个实施例或示例中以合适的方式结合。此外,在不相互矛盾的情况下,本领域的技 术人员可以将本说明书中描述的不同实施例或示例以及不同实施例或示例的特征进行结合和组合。
此外,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括至少一个该特征。在本发明的描述中,“多个”的含义是至少两个,例如两个,三个等,除非另有明确具体的限定。
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括一个或更多个用于实现特定逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分,并且本发明的优选实施方式的范围包括另外的实现,其中可以不按所示出或讨论的顺序,包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能,这应被本发明的实施例所属技术领域的技术人员所理解。
在流程图中表示或在此以其他方式描述的逻辑和/或步骤,例如,可以被认为是用于实现逻辑功能的可执行指令的定序列表,可以具体实现在任何计算机可读介质中,以供指令执行系统、装置或设备(如基于计算机的系统、包括处理器的系统或其他可以从指令执行系统、装置或设备取指令并执行指令的系统)使用,或结合这些指令执行系统、装置或设备而使用。就本说明书而言,"计算机可读介质"可以是任何可以包含、存储、通信、传播或传输程序以供指令执行系统、装置或设备或结合这些指令执行系统、装置或设备而使用的装置。计算机可读介质的更具体的示例(非穷尽性列表)包括以下:具有一个或多个布线的电连接部(电子装置),便携式计算机盘盒(磁装置),随机存取存储器(RAM),只读存储器(ROM),可擦除可编辑只读存储器(EPROM或闪速存储器),光纤装置,以及便携式光盘只读存储器(CDROM)。另外,计算机可读介质甚至可以是可在其上打印所述程序的纸或其他合适的介质,因为可以例如通过对纸或其他介质进行光学扫描,接着进行编辑、解译或必要时以其他合适方式进行处理来以电子方式获得所述程序,然后将其存储在计算机存储器中。
应当理解,本发明的各部分可以用硬件、软件、固件或它们的组合来实现。在上述实施方式中,多个步骤或方法可以用存储在存储器中且由合适的指令执行系统执行的软件或固件来实现。例如,如果用硬件来实现,和在另一实施方式中一样,可用本领域公知的下列技术中的任一项或他们的组合来实现:具有用于对数据信号实现逻辑功能的逻辑门电路的离散逻辑电路,具有合适的组合逻辑门电路的专用集成电路,可编程门阵列(PGA),现场可编程门阵列(FPGA)等。
本技术领域的普通技术人员可以理解实现上述实施例方法携带的全部或部分步骤是可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中, 该程序在执行时,包括方法实施例的步骤之一或其组合。
此外,在本发明各个实施例中的各功能单元可以集成在一个处理模块中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。
上述提到的存储介质可以是只读存储器,磁盘或光盘等。尽管上面已经示出和描述了本发明的实施例,可以理解的是,上述实施例是示例性的,不能理解为对本发明的限制,本领域的普通技术人员在本发明的范围内可以对上述实施例进行变化、修改、替换和变型。

Claims (20)

  1. 一种控制方法,用于控制电子装置,其特征在于,所述电子装置包括成像装置和显示器,所述成像装置包括图像传感器,所述图像传感器包括感光像素单元阵列和设置在所述感光像素单元阵列上的滤光片单元阵列,每个所述滤光片单元覆盖对应一个所述感光像素单元,每个所述感光像素单元包括多个感光像素;所述控制方法包括以下步骤:
    控制所述感光像素单元阵列曝光并输出合并图像,所述合并图像包括预定阵列排布的合并像素,同一所述感光像素单元的多个所述感光像素合并输出作为一个合并像素;
    控制所述感光像素单元阵列曝光并输出色块图像,所述色块图像包括预定阵列排布的原始像素,每个所述感光像素对应一个所述原始像素;
    通过缩放计算方式将所述合并图像转换成高亮图像,所述高亮图像包括预定阵列排布的高亮像素;
    通过插值计算方式将所述色块图像转换成低亮图像,所述低亮图像包括预定阵列排布的低亮像素;和
    合并所述高亮图像和所述低亮图像以得到宽动态范围图像。
  2. 如权利要求1所述的控制方法,其特征在于,所述色块图像包括预定阵列排布的图像像素单元,所述图像像素单元包括多个颜色相同的所述原始像素,所述低亮图像包括预定阵列的低亮像素,所述低亮像素包括当前像素,所述原始像素包括与所述当前像素对应的关联像素,所述通过插值计算方式将所述色块图像转换成低亮图像的步骤包括:
    判断所述当前像素的颜色与所述关联像素的颜色是否相同;
    在所述当前像素的颜色与所述关联像素的颜色相同时,将所述关联像素的像素值作为所述当前像素的像素值;和
    在所述当前像素的颜色与所述关联像素的颜色不同时,根据关联像素单元的像素值通过插值方式计算所述当前像素的像素值,所述图像像素单元的阵列包括所述关联像素单元,所述关联像素单元与所述当前像素颜色相同且与所述当前像素相邻。
  3. 如权利要求1或2所述的控制方法,其特征在于,所述预定阵列包括拜耳阵列。
  4. 如权利要求2所述的控制方法,其特征在于,所述图像像素单元包括2*2阵列的所述原始像素。
  5. 如权利要求2所述的控制方法,其特征在于,所述根据关联像素单元的像素值通过 插值方式计算当前像素的像素值的步骤包括以下步骤:
    计算所述关联像素各个方向上的渐变量;
    计算所述关联像素各个方向上的权重;和
    根据所述渐变量及所述权重计算所述当前像素的像素值。
  6. 如权利要求2所述的控制方法,其特征在于,所述控制方法在所述根据关联像素单元的像素值通过插值方式计算所述当前像素的像素值的步骤前包括以下步骤:
    对所述色块图像做白平衡补偿;
    所述控制方法在所述根据关联像素单元的像素值通过插值方式计算所述当前像素的像素值的步骤后包括以下步骤:
    对所述低亮图像做白平衡补偿还原。
  7. 如权利要求2所述的控制方法,其特征在于,所述控制方法在所述根据关联像素单元的像素值通过插值方式计算所述当前像素的像素值的步骤前包括以下步骤:
    对所述色块图像做坏点补偿。
  8. 如权利要求2所述的控制方法,其特征在于,所述控制方法在所述根据关联像素单元的像素值通过插值方式计算所述当前像素的像素值的步骤前包括以下步骤:
    对所述色块图像做串扰补偿。
  9. 如权利要求2所述的控制方法,其特征在于,所述控制方法在所述根据关联像素单元的像素值通过插值方式计算所述当前像素的像素值的步骤后包括以下步骤:
    对所述低亮图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
  10. 一种控制装置,用于控制电子装置,其特征在于,所述电子装置包括成像装置和显示器,所述成像装置包括图像传感器,所述图像传感器包括感光像素单元阵列和设置在所述感光像素单元阵列上的滤光片单元阵列,每个所述滤光片单元覆盖对应一个所述感光像素单元,每个所述感光像素单元包括多个感光像素;所述控制装置包括:
    第一控制模块,用于控制所述感光像素单元阵列曝光并输出合并图像,所述合并图像包括预定阵列排布的合并像素,同一所述感光像素单元的多个所述感光像素合并输出作为一个合并像素;
    第二控制模块,用于控制所述感光像素单元阵列曝光并输出色块图像,所述色块图像 包括预定阵列排布的原始像素,每个所述感光像素对应一个所述原始像素;
    第一图像处理模块,用于通过缩放计算方式将所述合并图像转换成高亮图像,所述高亮图像包括预定阵列排布的高亮像素;
    第二图像处理模块,用于通过插值计算方式将所述色块图像转换成低亮图像,所述低亮图像包括预定阵列排布的低亮像素;和
    合并模块,用于合并所述高亮图像和所述低亮图像以得到宽动态范围图像。
  11. 如权利要求10所述的控制装置,其特征在于,所述色块图像包括预定阵列排布的图像像素单元,所述图像像素单元包括多个颜色相同的原始像素,所述低亮图像包括预定阵列的低亮像素,所述低亮像素包括当前像素,所述原始像素包括与所述当前像素对应的关联像素,所述第二图像处理模块包括:
    判断单元,用于判断所述当前像素的颜色与所述关联像素的颜色是否相同;
    第一计算单元,用于在所述当前像素的颜色与所述关联像素的颜色相同时,将所述关联像素的像素值作为所述当前像素的像素值;和
    第二计算单元,用于在所述当前像素的颜色与所述关联像素的颜色不同时,根据关联像素单元的像素值通过插值方式计算所述当前像素的像素值,所述像素单元包括所述关联像素单元,所述关联像素单元与所述当前像素颜色相同且与所述当前像素相邻。
  12. 如权利要求10或11所述的控制装置,其特征在于,所述预定阵列包括拜耳阵列。
  13. 如权利要求11所述的控制装置,其特征在于,所述图像像素单元包括2*2阵列的所述原始像素。
  14. 如权利要求11所述的控制装置,其特征在于,所述第二计算单元包括:
    第一计算子单元,用于计算所述关联像素各个方向上的渐变量;
    第二计算子单元,用于计算所述关联像素各个方向上的权重;和
    第三计算子单元,用于根据所述渐变量及所述权重计算所述当前像素的像素值。
  15. 如权利要求11所述的控制装置,其特征在于,所述第二图像处理模块包括:
    白平衡补偿单元,用于对所述色块图像做白平衡补偿;和
    白平衡补偿还原单元,用于对所述低亮图像做白平衡补偿还原。
  16. 如权利要求11所述的控制装置,其特征在于,所述第二图像处理模块包括:
    坏点补偿单元,用于对所述色块图像做坏点补偿。
  17. 如权利要求11所述的控制装置,其特征在于,所述第二图像处理模块包括:
    串扰补偿单元,用于对所述色块图像做串扰补偿。
  18. 如权利要求11所述的控制装置,其特征在于,所述第二图像处理模块包括:
    处理单元,对所述低亮图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
  19. 一种电子装置,其特征在于,包括:
    成像装置;
    显示器;和
    如权利要求10-18任意一项所述的控制装置。
  20. 一种电子装置,包括壳体、处理器、存储器、电路板、电源电路和成像装置,其特征在于,所述电路板安置在所述壳体围成的空间内部,所述处理器和所述存储器设置在所述电路板上;所述电源电路,用于为所述电子装置的各个电路或器件供电;所述存储器用于存储可执行程序代码;所述处理器通过读取所述存储器中存储的可执行程序代码来运行与所述可执行程序代码对应的程序,以用于执行如权利要求1至9中任一项所述的控制方法。
PCT/CN2017/085214 2016-11-29 2017-05-19 控制方法、控制装置和电子装置 WO2018099010A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611079317.4 2016-11-29
CN201611079317.4A CN106412407B (zh) 2016-11-29 2016-11-29 控制方法、控制装置及电子装置

Publications (1)

Publication Number Publication Date
WO2018099010A1 true WO2018099010A1 (zh) 2018-06-07

Family

ID=58085618

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/085214 WO2018099010A1 (zh) 2016-11-29 2017-05-19 控制方法、控制装置和电子装置

Country Status (5)

Country Link
US (1) US10531019B2 (zh)
EP (1) EP3328077B1 (zh)
CN (1) CN106412407B (zh)
ES (1) ES2769306T3 (zh)
WO (1) WO2018099010A1 (zh)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106412407B (zh) 2016-11-29 2019-06-07 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Control method, control device, and electronic device
CN106791477B (zh) * 2016-11-29 2019-07-19 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method, image processing device, imaging device, and manufacturing method
CN107222681B (zh) * 2017-06-30 2018-11-30 Vivo Mobile Communication Co., Ltd. Image data processing method and mobile terminal
CN108270942B (zh) * 2018-01-31 2020-09-25 Weihai Hualing Opto-Electronics Co., Ltd. Image scanning device, and method and device for controlling reception of image-scanning optical signals
CN108419022A (zh) * 2018-03-06 2018-08-17 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Control method, control device, computer-readable storage medium, and computer equipment
CN110874829B (zh) * 2018-08-31 2022-10-14 Beijing Xiaomi Mobile Software Co., Ltd. Image processing method and device, electronic device, and storage medium
CN110876014B (zh) * 2018-08-31 2022-04-08 Beijing Xiaomi Mobile Software Co., Ltd. Image processing method and device, electronic device, and storage medium
CN111835941B (zh) * 2019-04-18 2022-02-15 Beijing Xiaomi Mobile Software Co., Ltd. Image generation method and device, electronic device, and computer-readable storage medium
US11570424B2 (en) * 2019-06-24 2023-01-31 Infineon Technologies Ag Time-of-flight image sensor resolution enhancement and increased data robustness using a binning module
CN110675404B (zh) * 2019-09-03 2023-03-21 RealMe Chongqing Mobile Telecommunications Corp., Ltd. Image processing method, image processing device, storage medium, and terminal device
CN110519485B (zh) * 2019-09-09 2021-08-31 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and device, storage medium, and electronic device
KR102625261B1 (ko) 2019-10-21 2024-01-12 Samsung Electronics Co., Ltd. Image device
CN112785533B (zh) * 2019-11-07 2023-06-16 RealMe Chongqing Mobile Telecommunications Corp., Ltd. Image fusion method, image fusion device, electronic device, and storage medium
KR20210112042A (ko) * 2020-03-04 2021-09-14 SK Hynix Inc. Image sensing device and method of operating the same
CN111355937B (zh) * 2020-03-11 2021-11-16 Beijing Megvii Technology Co., Ltd. Image processing method and device, and electronic device
CN111491111B (zh) * 2020-04-20 2021-03-26 Guangdong Oppo Mobile Telecommunications Corp., Ltd. High dynamic range image processing system and method, electronic device, and readable storage medium
CN111970451B (zh) * 2020-08-31 2022-01-07 Oppo (Chongqing) Intelligent Technology Co., Ltd. Image processing method, image processing device, and terminal device
CN112492161B (zh) * 2020-11-30 2021-10-26 Vivo Mobile Communication Co., Ltd. Image sensor, camera module, and electronic device
CN112492162B (zh) * 2020-11-30 2022-04-01 Vivo Mobile Communication Co., Ltd. Image sensor, camera module, and electronic device
CN112437237B (zh) * 2020-12-16 2023-02-03 Vivo Mobile Communication Co., Ltd. Photographing method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090073292A1 * 2007-09-18 2009-03-19 Stmicroelectronics S.r.l. Method for acquiring a digital image with a large dynamic range with a sensor of lesser dynamic range
CN103201766A (zh) * 2010-11-03 2013-07-10 Eastman Kodak Company Method for producing high dynamic range images
CN103748868A (zh) * 2011-08-31 2014-04-23 Sony Corporation Imaging device, signal processing method, and program
CN105578005A (zh) * 2015-12-18 2016-05-11 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Imaging method for image sensor, imaging device, and electronic device
CN105592270A (zh) * 2015-12-18 2016-05-18 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image brightness compensation method, device, and terminal equipment
CN106412407A (zh) * 2016-11-29 2017-02-15 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Control method, control device, and electronic device

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100843087B1 (ko) * 2006-09-06 2008-07-02 Samsung Electronics Co., Ltd. Image generating apparatus and method
US7777804B2 * 2007-10-26 2010-08-17 Omnivision Technologies, Inc. High dynamic range sensor with reduced line memory for color interpolation
US7745779B2 * 2008-02-08 2010-06-29 Aptina Imaging Corporation Color pixel arrays having common color filters for multiple adjacent pixels for use in CMOS imagers
TWI422020B (zh) 2008-12-08 2014-01-01 Sony Corp Solid-state imaging device
US8456557B2 * 2011-01-31 2013-06-04 SK Hynix Inc. Dynamic range extension for CMOS image sensors for mobile applications
JP2013038504A (ja) * 2011-08-04 2013-02-21 Sony Corp Imaging apparatus, image processing method, and program
JP2013066146A (ja) * 2011-08-31 2013-04-11 Sony Corp Image processing apparatus, image processing method, and program
AU2012374649A1 * 2012-03-27 2014-09-11 Sony Corporation Image processing device, image-capturing element, image processing method, and program
CN103531603B (zh) * 2013-10-30 2018-10-16 Shanghai IC R&D Center Co., Ltd. CMOS image sensor
US9479695B2 * 2014-07-31 2016-10-25 Apple Inc. Generating a high dynamic range image using a temporal filter
CN105472266A (zh) * 2015-12-18 2016-04-06 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method for generating high dynamic range image, photographing device, and terminal


Also Published As

Publication number Publication date
EP3328077A1 (en) 2018-05-30
CN106412407A (zh) 2017-02-15
US10531019B2 (en) 2020-01-07
EP3328077B1 (en) 2020-01-01
ES2769306T3 (es) 2020-06-25
US20180152646A1 (en) 2018-05-31
CN106412407B (zh) 2019-06-07

Similar Documents

Publication Publication Date Title
WO2018099010A1 (zh) Control method, control device, and electronic device
WO2018099031A1 (zh) Control method and electronic device
WO2018099008A1 (zh) Control method, control device, and electronic device
WO2018099007A1 (zh) Control method, control device, and electronic device
WO2018098977A1 (zh) Image processing method, image processing device, imaging device, manufacturing method, and electronic device
WO2018099005A1 (zh) Control method, control device, and electronic device
WO2018099012A1 (zh) Image processing method, image processing device, imaging device, and electronic device
WO2018099006A1 (zh) Control method, control device, and electronic device
WO2018099011A1 (zh) Image processing method, image processing device, imaging device, and electronic device
WO2018098982A1 (zh) Image processing method, image processing device, imaging device, and electronic device
WO2018099030A1 (zh) Control method and electronic device
WO2018098981A1 (zh) Control method, control device, electronic device, and computer-readable storage medium
WO2018098984A1 (zh) Control method, control device, imaging device, and electronic device
WO2018099009A1 (zh) Control method, control device, electronic device, and computer-readable storage medium
WO2018098978A1 (zh) Control method, control device, electronic device, and computer-readable storage medium
WO2018098983A1 (zh) Image processing method and device, control method and device, and imaging and electronic device
JP2009290795A (ja) Image processing apparatus, image processing method, image processing program, recording medium, and electronic information device

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 17877173

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 17877173

Country of ref document: EP

Kind code of ref document: A1