WO2018099008A1 - Control Method, Control Device and Electronic Device - Google Patents

Control Method, Control Device and Electronic Device

Info

Publication number
WO2018099008A1
WO2018099008A1 · PCT/CN2017/085212
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel
merged
array
unit
Application number
PCT/CN2017/085212
Other languages
English (en)
French (fr)
Inventor
唐城
Original Assignee
广东欧珀移动通信有限公司
Application filed by 广东欧珀移动通信有限公司
Publication of WO2018099008A1

Classifications

    • H04N23/80 Camera processing pipelines; Components thereof
    • G06T3/4015 Image demosaicing, e.g. colour filter arrays [CFA] or Bayer patterns
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control based on recognised objects where the recognised objects include parts of the human body
    • H04N23/672 Focus control based on the phase difference signals
    • H04N23/843 Demosaicing, e.g. interpolating colour pixel values
    • H04N23/88 Camera processing pipelines for colour balance, e.g. white-balance circuits or colour temperature control
    • H04N25/134 Arrangement of colour filter arrays [CFA] based on three different wavelength filter elements
    • H04N25/46 Extracting pixel data from image sensors by combining or binning pixels
    • H04N25/62 Detection or reduction of noise due to excess charges produced by the exposure, e.g. smear, blooming, ghost image, crosstalk or leakage between pixels
    • H04N9/77 Circuits for processing the brightness signal and the chrominance signal relative to each other

Definitions

  • The present invention relates to imaging technology and, in particular, to a control method, a control device, and an electronic device.
  • An existing image sensor includes an array of photosensitive pixel units and an array of filter units disposed on the photosensitive pixel unit array; each filter unit covers a corresponding photosensitive pixel unit, and each photosensitive pixel unit includes a plurality of photosensitive pixels.
  • The image sensor can be controlled to output a merged image: the merged image includes a merged pixel array, and the plurality of photosensitive pixels of the same photosensitive pixel unit are combined and output as one corresponding merged pixel. In this way, the signal-to-noise ratio of the merged image is improved; however, the resolution of the merged image is lowered.
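As an illustrative software sketch of this merging step (the patent performs the merge in sensor hardware via an adder; the function name and pure-Python representation are assumptions for illustration), each 2*2 block of photosensitive-pixel readings is summed into one merged pixel:

```python
def bin_2x2(raw):
    """Merge each 2*2 block of photosensitive-pixel readings into one
    merged pixel by summing, halving the resolution in each dimension."""
    h, w = len(raw), len(raw[0])
    assert h % 2 == 0 and w % 2 == 0
    return [[raw[2 * i][2 * j] + raw[2 * i][2 * j + 1] +
             raw[2 * i + 1][2 * j] + raw[2 * i + 1][2 * j + 1]
             for j in range(w // 2)]
            for i in range(h // 2)]

# A 4*4 readout becomes a 2*2 merged image (16M -> 4M in the patent's terms).
raw = [[0, 1, 2, 3],
       [4, 5, 6, 7],
       [8, 9, 10, 11],
       [12, 13, 14, 15]]
print(bin_2x2(raw))  # [[10, 18], [42, 50]]
```

The summed block collects four times the signal of a single pixel, which is the source of the sensitivity and signal-to-noise gains described above, at the cost of a quarter of the resolution.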
  • The image sensor may also be controlled to output a high-pixel patch image.
  • The patch image includes an original pixel array, and each photosensitive pixel corresponds to one original pixel.
  • Since the plurality of original pixels corresponding to the same filter unit have the same color, outputting the patch image by itself cannot improve resolution. The high-pixel patch image therefore has to be converted by interpolation into a high-pixel pseudo-original image, in which the pseudo-original pixels are arranged in a Bayer array.
  • The pseudo-original image can then be converted into a pseudo-original true-color image by image processing and saved.
  • However, interpolation is computation-intensive and time-consuming, and not every scene is suitable for, or needs, a pseudo-original true-color image.
  • Embodiments of the present invention provide a control method, a control device, and an electronic device.
  • The present invention provides a control method for controlling an electronic device. The electronic device includes an imaging device and a display; the imaging device includes an image sensor; the image sensor includes an array of photosensitive pixel units and an array of filter units disposed on the photosensitive pixel unit array; each filter unit covers a corresponding photosensitive pixel unit, and each photosensitive pixel unit includes a plurality of photosensitive pixels. The control method comprises the following steps:
  • controlling the image sensor to output a merged image, the merged image including a merged pixel array in which the plurality of photosensitive pixels of the same photosensitive pixel unit are combined and output as one merged pixel; and
  • converting the merged image into a merged true-color image.
  • The present invention provides a control device for controlling an electronic device. The electronic device includes an imaging device and a display; the imaging device includes an image sensor; the image sensor includes an array of photosensitive pixel units and an array of filter units disposed on the photosensitive pixel unit array; each filter unit covers a corresponding photosensitive pixel unit, and each photosensitive pixel unit includes a plurality of photosensitive pixels. The control device comprises:
  • a first control module configured to control the image sensor to output a merged image, the merged image including a merged pixel array in which the plurality of photosensitive pixels of the same photosensitive pixel unit are combined and output as one merged pixel;
  • a dividing module configured to divide the merged image into analysis areas arranged in an array;
  • a calculation module configured to calculate a phase difference for each analysis area;
  • a merging module configured to merge the analysis areas whose phase differences satisfy a predetermined condition into a focus area;
  • an identification module configured to identify whether a human face exists in the focus area; and
  • a first conversion module configured to convert the merged image into a merged true-color image when a human face is present.
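The module pipeline above can be sketched as a plain control flow. Everything here is a hypothetical stand-in for the claimed modules (the grid size, the callbacks, and the helper names are assumptions), not the patent's implementation:

```python
def divide_into_areas(image, m, n):
    """Dividing module: split an image (list of rows) into an m*n grid of areas."""
    h, w = len(image), len(image[0])
    bh, bw = h // m, w // n
    return [[row[j * bw:(j + 1) * bw] for row in image[i * bh:(i + 1) * bh]]
            for i in range(m) for j in range(n)]

def control(merged, phase_difference, predetermined, has_face, convert):
    """Sketch of the claimed pipeline: divide -> phase difference -> focus
    area -> face check -> true-color conversion (else keep the merged image)."""
    areas = divide_into_areas(merged, m=2, n=2)                       # dividing module
    focus = [a for a in areas if predetermined(phase_difference(a))]  # merging module
    if any(has_face(a) for a in focus):                               # identification module
        return convert(merged)                                        # first conversion module
    return merged

# Toy run: every area is "in focus" and each contains a face.
img = [[1, 2, 3, 4]] * 4
out = control(img,
              phase_difference=lambda a: 0.0,
              predetermined=lambda pd: abs(pd) < 0.5,
              has_face=lambda a: True,
              convert=lambda m: "merged true-color image")
print(out)  # merged true-color image
```

Passing the per-module behavior in as callbacks mirrors the claim structure: each module is an independently replaceable unit.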
  • The present invention provides an electronic device including an imaging device, a display, and the above control device.
  • The present invention further provides an electronic device comprising:
  • an imaging device comprising an image sensor, the image sensor comprising an array of photosensitive pixel units and an array of filter units disposed on the photosensitive pixel unit array, each filter unit covering a corresponding photosensitive pixel unit, and each photosensitive pixel unit comprising a plurality of photosensitive pixels; and
  • a processor configured to:
  • control the image sensor to output a merged image, the merged image including a merged pixel array in which the plurality of photosensitive pixels of the same photosensitive pixel unit are combined and output as one merged pixel; and
  • convert the merged image into a merged true-color image.
  • The control method, control device, and electronic device of the embodiments of the present invention identify and judge the image within the depth of field and control the image sensor to output a suitable image. This avoids the heavy workload caused by the image sensor invariably outputting a high-quality image, thereby reducing the working time of the electronic device, improving working efficiency, and improving user satisfaction.
  • FIG. 1 is a schematic flow chart of a control method according to an embodiment of the present invention.
  • FIG. 2 is a block diagram of a control device according to an embodiment of the present invention.
  • FIG. 3 is a schematic block diagram of an electronic device according to an embodiment of the present invention.
  • FIG. 4 is another schematic flowchart of a control method according to an embodiment of the present invention.
  • FIG. 5 is another block diagram of a control device according to an embodiment of the present invention.
  • FIG. 6 is a schematic flow chart of still another control method according to an embodiment of the present invention.
  • FIG. 7 is still another schematic flowchart of a control method according to an embodiment of the present invention.
  • FIG. 8 is a block diagram showing still another module of the control device according to the embodiment of the present invention.
  • FIG. 9 is a schematic block diagram of a third conversion module according to an embodiment of the present invention.
  • FIG. 10 is a schematic block diagram of an image sensor according to an embodiment of the present invention.
  • FIG. 11 is a circuit diagram of an image sensor according to an embodiment of the present invention.
  • FIG. 12 is a schematic view of a filter unit according to an embodiment of the present invention.
  • FIG. 13 is a schematic structural diagram of an image sensor according to an embodiment of the present invention.
  • FIG. 14 is a schematic diagram of a state of a merged image according to an embodiment of the present invention.
  • FIG. 15 is a schematic diagram showing a state of a patch image according to an embodiment of the present invention.
  • FIG. 16 is a schematic diagram showing the state of a control method according to an embodiment of the present invention.
  • FIG. 17 is another schematic flow chart of a control method according to an embodiment of the present invention.
  • FIG. 18 is a schematic block diagram of a second computing unit according to an embodiment of the present invention.
  • FIG. 19 is still another schematic flowchart of a control method according to an embodiment of the present invention.
  • FIG. 20 is another schematic block diagram of a third conversion module according to an embodiment of the present invention.
  • FIG. 21 is a schematic diagram of an image pixel unit of a patch image according to an embodiment of the present invention.
  • FIG. 22 is still another schematic flowchart of a control method according to an embodiment of the present invention.
  • FIG. 23 is another schematic block diagram of a third conversion module according to an embodiment of the present invention.
  • FIG. 24 is still another schematic flowchart of a control method according to an embodiment of the present invention.
  • FIG. 25 is a schematic block diagram of a first conversion module according to an embodiment of the present invention.
  • FIG. 26 is another schematic block diagram of an electronic device according to an embodiment of the present invention.
  • Reference numerals: electronic device 1000; imaging device 100; image sensor 10; photosensitive pixel unit array 12; photosensitive pixel unit 12a; photosensitive pixel subunit 120; photosensitive pixel 122; photosensitive device 1222; transfer tube 1224; source follower 124; analog-to-digital converter 126; adder 128; filter unit array 14; filter unit 14a; control device 200; first control module 211; dividing module 212; calculation module 213; merging module 214; identification module 215; first conversion module 216; determining module 217; second conversion module 218; second control module 219; third conversion module 220; fourth conversion module 230; determining unit 221; first calculating unit 222; second calculating unit 223; first calculation subunit 2232; second calculation subunit 2234; third calculation subunit 2236; white balance compensation unit 224; white balance compensation restoration unit 225; dead pixel compensation unit 226; crosstalk compensation unit 227; processing unit 228; first conversion unit 2162.
  • The terms “first” and “second” are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
  • Features defined by “first” or “second” may explicitly or implicitly include one or more of the described features.
  • The meaning of “a plurality” is two or more, unless specifically defined otherwise.
  • In the description of the present invention, it should be noted that the terms “installation”, “connected”, and “connection” are to be understood broadly unless otherwise explicitly defined: the connection may be fixed, detachable, or integral; mechanical, electrical, or a communicative coupling; direct, or indirect through an intermediate medium; and it may be internal communication between two elements or an interaction between two elements. For those skilled in the art, the specific meanings of the above terms in the present invention can be understood on a case-by-case basis.
  • a control method of an embodiment of the present invention is used to control an electronic device 1000.
  • the electronic device 1000 includes an imaging device 100 and a display.
  • The imaging device 100 includes an image sensor 10. The image sensor 10 includes a photosensitive pixel unit array 12 and a filter unit array 14 disposed on the photosensitive pixel unit array 12; each filter unit 14a covers a corresponding photosensitive pixel unit 12a, and each photosensitive pixel unit 12a includes a plurality of photosensitive pixels 122. The control method comprises the following steps:
  • S211: Control the image sensor to output a merged image, the merged image comprising a merged pixel array in which the plurality of photosensitive pixels of the same photosensitive pixel unit are combined and output as one merged pixel;
  • a control device 200 is used to control an electronic device 1000.
  • the electronic device 1000 includes an imaging device 100 and a display.
  • The imaging device 100 includes an image sensor 10. The image sensor 10 includes a photosensitive pixel unit array 12 and a filter unit array 14 disposed on the photosensitive pixel unit array 12; each filter unit 14a covers a corresponding photosensitive pixel unit 12a, and each photosensitive pixel unit 12a includes a plurality of photosensitive pixels 122. The control device 200 includes a first control module 211, a dividing module 212, a calculation module 213, a merging module 214, an identification module 215, and a first conversion module 216.
  • The first control module 211 is configured to control the image sensor 10 to output a merged image; the merged image includes a merged pixel array, and the plurality of photosensitive pixels 122 of the same photosensitive pixel unit 12a are combined and output as one merged pixel.
  • The dividing module 212 is configured to divide the merged image into analysis areas arranged in an array.
  • The calculation module 213 is configured to calculate the phase difference of each analysis area.
  • The merging module 214 is configured to merge the analysis areas whose phase differences satisfy the predetermined condition into a focus area.
  • The identification module 215 is configured to identify whether a human face exists in the focus area.
  • The first conversion module 216 is configured to convert the merged image into a merged true-color image when a human face is present.
  • Step S211 can be implemented by the first control module 211, step S212 by the dividing module 212, step S213 by the calculation module 213, step S214 by the merging module 214, step S215 by the identification module 215, and step S216 by the first conversion module 216.
  • An electronic device 1000 includes an imaging device 100, a display, and a control device 200 according to an embodiment of the present invention. That is, the control device 200 of an embodiment of the present invention may be applied in the electronic device 1000 of an embodiment of the present invention.
  • The control method, the control device 200, and the electronic device 1000 of the present invention control the image sensor 10 to output an appropriate image by recognizing and judging the image within the depth of field, thereby avoiding the heavy workload caused by the image sensor 10 invariably outputting a high-quality image. This reduces the working time of the electronic device 1000, improves working efficiency, and improves user satisfaction.
  • The electronic device 1000 may be any electronic device having an imaging device, such as a mobile phone or a tablet computer, and is not limited herein.
  • The electronic device 1000 of the embodiment of the present invention is a mobile phone.
  • The imaging device 100 includes a front camera or a rear camera.
  • Step S212 may divide the merged image into M*N analysis areas; the phase difference of each analysis area is then calculated in step S213, so that the focus area, that is, the image within the depth of field, is obtained from a judgment on the phase differences.
  • The image within the depth of field is clear and has a high reference value for image processing.
  • The image outside the depth of field is blurred, and its reference value for deciding whether to perform image processing is low. Therefore, only the image within the depth of field is analyzed to decide whether to process the image, so that the workload can be reduced while a high-quality image is still obtained.
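Phase-detection autofocus reports a near-zero phase difference for regions that are in focus, i.e. within the depth of field. A minimal sketch of the predetermined-condition test over the M*N grid (the threshold value and function name are illustrative assumptions, not values from the patent):

```python
def select_focus_areas(phase_diffs, threshold=0.5):
    """Return the (row, col) indices of analysis areas whose phase
    difference magnitude is within the threshold, i.e. inside the depth
    of field; these areas are merged into the focus area."""
    return [(i, j)
            for i, row in enumerate(phase_diffs)
            for j, pd in enumerate(row)
            if abs(pd) <= threshold]

# A 2*3 grid of per-area phase differences (the M*N analysis areas).
pds = [[0.1, 2.0, 0.4],
       [1.5, 0.0, 3.2]]
print(select_focus_areas(pds))  # [(0, 0), (0, 2), (1, 1)]
```

Only the selected in-focus areas are then passed to face detection or landscape-feature analysis, which is how the method avoids processing the blurred, low-value parts of the frame.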
  • The control device 200 controls the image sensor 10 to output a merged image, and the merged image is converted into a merged true-color image.
  • The control method further includes the following steps:
  • S217: When there is no human face, determine whether the brightness of the focus area is less than or equal to the brightness threshold, whether the green ratio is less than or equal to the ratio threshold, and whether the spatial frequency is less than or equal to the frequency threshold;
  • S218: Convert the merged image into a merged true-color image when the brightness of the focus area is less than or equal to the brightness threshold, the green ratio is less than or equal to the ratio threshold, and the spatial frequency is less than or equal to the frequency threshold.
  • the control device 200 includes a determination module 217 and a second conversion module 218.
  • the determining module 217 is configured to determine, when there is no human face, whether the brightness of the focus area is less than or equal to the brightness threshold, whether the green ratio is less than or equal to the ratio threshold, and whether the spatial frequency is less than or equal to the frequency threshold.
  • the second conversion module 218 is configured to convert the merged image into a merged true color image when the brightness of the focus area is less than or equal to the brightness threshold, the green ratio is less than or equal to the ratio threshold, and the spatial frequency is less than or equal to the frequency threshold.
  • The feature points of a landscape image are brightness, green ratio, and spatial frequency; by comparing these feature points with corresponding thresholds, it can be determined whether the image is a landscape image. Because a landscape image requires high image quality and relatively high ambient brightness, when no landscape image is detected the control device 200 controls the image sensor 10 to output a merged image and converts the merged image into a merged true-color image.
  • Otherwise, the control device 200 controls the image sensor 10 to output a patch image, first converts the patch image into a pseudo-original image, and then converts the pseudo-original image into a pseudo-original true-color image.
  • The brightness threshold, the ratio threshold, and the frequency threshold can be set differently by the user according to the environment, so that the user can adjust one or more of the thresholds to achieve a better shooting effect.
  • The brightness threshold, the ratio threshold, and the frequency threshold may also be several different values stored in the memory of the electronic device 1000 for the user to select from, without limitation here.
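The three-way threshold decision can be sketched as follows. Treating a scene as a landscape only when brightness, green ratio, and spatial frequency all exceed their thresholds matches the branching described in this document; the specific numbers and the function name are illustrative assumptions:

```python
def choose_output(brightness, green_ratio, spatial_freq,
                  brightness_thr=128, ratio_thr=0.3, freq_thr=20.0):
    """If all three features exceed their thresholds, the scene is treated
    as a landscape and the high-pixel patch image is requested; otherwise
    the merged image is output and converted to a merged true-color image."""
    if (brightness > brightness_thr and green_ratio > ratio_thr
            and spatial_freq > freq_thr):
        return "patch image"
    return "merged image"

# Bright, green, detailed scene -> high-quality patch-image path.
print(choose_output(180, 0.45, 30.0))  # patch image
# Dim scene -> cheaper merged-image path.
print(choose_output(90, 0.45, 30.0))   # merged image
```

Keeping the thresholds as parameters reflects the text above: they can be user-set or selected from several stored values.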
  • The control method further includes the following steps:
  • The image sensor is controlled to output a patch image; the patch image includes image pixel units arranged in a predetermined array, each image pixel unit includes a plurality of original pixels, and each photosensitive pixel corresponds to one original pixel.
  • S220: Convert the patch image into a pseudo-original image, where the pseudo-original image includes pseudo-original pixels arranged in an array, the pseudo-original pixels include a current pixel, and the original pixels of the patch image include an associated pixel corresponding to the current pixel;
  • Step S220 includes the following step:
  • calculating the pixel value of the current pixel by a first interpolation algorithm according to the pixel values of an associated pixel unit, where the image pixel unit includes the associated pixel unit, and the associated pixel unit has the same color as the current pixel and is adjacent to the current pixel.
  • the control device 200 includes a second control module 219 , a third conversion module 220 , and a fourth conversion module 230 .
  • The second control module 219 is configured to control the image sensor 10 to output a patch image when the brightness of the focus area is greater than the brightness threshold, the green ratio is greater than the ratio threshold, and the spatial frequency is greater than the frequency threshold; the patch image includes image pixel units arranged in a predetermined array, each image pixel unit includes a plurality of original pixels, and each photosensitive pixel corresponds to one original pixel.
  • The third conversion module 220 is configured to convert the patch image into a pseudo-original image, where the pseudo-original image includes pseudo-original pixels arranged in an array, the pseudo-original pixels include a current pixel, and the original pixels of the patch image include an associated pixel corresponding to the current pixel.
  • the third conversion module 220 includes a determination unit 221, a first calculation unit 222, and a second calculation unit 223.
  • the determining unit 221 is configured to determine whether the color of the current pixel is the same as the color of the associated pixel.
  • the first calculating unit 222 is configured to use the pixel value of the associated pixel as the pixel value of the current pixel when the color of the current pixel is the same as the color of the associated pixel.
  • The second calculating unit 223 is configured to calculate the pixel value of the current pixel by the first interpolation algorithm according to the pixel values of the associated pixel unit when the color of the current pixel differs from the color of the associated pixel; the image pixel unit includes the associated pixel unit, and the associated pixel unit has the same color as the current pixel and is adjacent to the current pixel.
  • The fourth conversion module 230 is configured to convert the pseudo-original image into a pseudo-original true-color image.
  • When these conditions are met, the control device 200 determines that the image is a landscape image; because a landscape image places high demands on image quality, the control device 200 controls the image sensor 10 to output the patch image, converts the patch image into a pseudo-original image, and converts the pseudo-original image into a pseudo-original true-color image, so that the user can obtain a high-quality landscape image.
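The patch-to-pseudo-original conversion hinges on the determining and calculating units described above: if the current (Bayer-target) pixel's color matches the associated original pixel, the value is copied; otherwise it is interpolated from the same-color, adjacent associated pixel unit. A minimal sketch, where the plain mean is an assumed placeholder for the patent's first interpolation algorithm:

```python
def pseudo_original_value(current_color, associated_color,
                          associated_value, associated_unit_values):
    """Determining unit: same color -> copy the associated pixel's value
    (first calculating unit); different color -> interpolate from the
    same-color, adjacent associated pixel unit (second calculating unit;
    a simple mean stands in for the first interpolation algorithm)."""
    if current_color == associated_color:
        return associated_value
    return sum(associated_unit_values) / len(associated_unit_values)

# Same color: the value passes through unchanged.
print(pseudo_original_value('G', 'G', 120, [100, 110]))      # 120
# Different color: interpolate from the same-color neighbours.
print(pseudo_original_value('R', 'G', 120, [96, 104, 100]))  # 100.0
```

Running this decision for every position of the Bayer target grid yields the pseudo-original image that the fourth conversion module then turns into a true-color image.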
  • the image sensor 10 of the embodiment of the present invention includes a photosensitive pixel unit array 12 and a filter unit array 14 disposed on the photosensitive pixel unit array 12.
  • the photosensitive pixel unit array 12 includes a plurality of photosensitive pixel units 12a, each of which includes a plurality of adjacent photosensitive pixels 122.
  • Each of the photosensitive pixels 122 includes a photosensitive device 1222 and a transfer tube 1224, wherein the photosensitive device 1222 may be a photodiode, and the transfer tube 1224 may be a MOS transistor.
  • the filter unit array 14 includes a plurality of filter units 14a, each of which covers a corresponding one of the photosensitive pixel units 12a.
  • The filter unit array 14 includes a Bayer array; that is, the four adjacent filter units 14a are respectively one red filter unit, one blue filter unit, and two green filter units.
  • Each photosensitive pixel unit 12a corresponds to a filter unit 14a of a single color: if one photosensitive pixel unit 12a includes a total of n adjacent photosensitive devices 1222, one filter unit 14a covers the n photosensitive devices 1222 in that photosensitive pixel unit 12a. The filter unit 14a may be of an integral structure, or may be assembled from n independent sub-filters.
  • each photosensitive pixel unit 12a includes four adjacent photosensitive pixels 122.
  • The two adjacent photosensitive pixels 122 together constitute one photosensitive pixel subunit 120, and the photosensitive pixel subunit 120 further includes a source follower 124 and an analog-to-digital converter 126.
  • The photosensitive pixel unit 12a further includes an adder 128. One terminal electrode of each transfer tube 1224 of a photosensitive pixel subunit 120 is connected to the cathode electrode of the corresponding photosensitive device 1222, the other terminal of each transfer tube 1224 is commonly connected to the gate electrode of the source follower 124, and the source electrode of the source follower 124 is connected to an analog-to-digital converter 126.
  • the source follower 124 can be a MOS transistor.
  • the two photosensitive pixel subunits 120 are connected to the adder 128 through respective source followers 124 and analog to digital converters 126.
  • The four adjacent photosensitive devices 1222 in one photosensitive pixel unit 12a of the image sensor 10 of the embodiment of the present invention share a filter unit 14a of the same color, and each photosensitive device 1222 is connected to a transfer tube 1224.
  • the adjacent two photosensitive devices 1222 share a source follower 124 and an analog to digital converter 126, and the adjacent four photosensitive devices 1222 share an adder 128.
  • the adjacent four photosensitive devices 1222 are arranged in a 2*2 array.
  • the two photosensitive devices 1222 in one photosensitive pixel subunit 120 may be in the same column.
  • In this way, the photosensitive pixels 122 may be combined to output a merged image.
  • The photosensitive device 1222 is used to convert light into electric charge, the generated charge being proportional to the light intensity, and the transfer tube 1224 is used to turn the circuit on or off according to a control signal.
  • When the circuit is turned on, the source follower 124 converts the charge signal generated by the photosensitive device 1222 into a voltage signal.
  • Analog to digital converter 126 is used to convert the voltage signal into a digital signal.
  • the adder 128 is for adding two digital signals together for output.
  • taking a 16M image sensor 10 as an example, the image sensor 10 of the embodiment of the present invention can merge its 16M photosensitive pixels 122 into 4M, that is, output a merged image.
  • after merging, a photosensitive pixel 122 equivalently becomes 4 times its original size, which increases the sensitivity of the photosensitive pixel 122.
  • in addition, since the noise in the image sensor 10 is mostly random noise, one or two of the four photosensitive pixels 122 may contain noise before merging; after the four photosensitive pixels 122 are merged into one large photosensitive pixel 122, the influence of that noise on the large pixel is reduced, that is, the noise is attenuated and the signal-to-noise ratio is improved.
  • however, while the photosensitive pixel 122 becomes larger, the resolution of the merged image also decreases because the pixel count decreases.
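The 2*2 merging described above can be illustrated with a short sketch. This is a minimal sketch under stated assumptions: the raw sensor output is modeled as a single-channel NumPy array with even dimensions, the adder's digital summation is modeled as a plain sum, and `bin2x2` is a hypothetical helper name, not part of the patent.

```python
import numpy as np

def bin2x2(raw: np.ndarray) -> np.ndarray:
    """Merge each 2*2 block of same-color photosensitive pixels into one
    merged pixel by summing the four digitized values, mimicking the
    adder combining the signals; a 16M array becomes a 4M merged image.
    Assumes raw has even height and width."""
    h, w = raw.shape
    # Sum the four pixels of every 2*2 block via strided slices.
    return (raw[0:h:2, 0:w:2] + raw[1:h:2, 0:w:2]
            + raw[0:h:2, 1:w:2] + raw[1:h:2, 1:w:2])
```

Summing four pixels also averages out uncorrelated random noise, which is the signal-to-noise improvement described above.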
  • during imaging, when the four photosensitive devices 1222 covered by the same filter unit 14a are exposed in sequence, a patch image can be output. As before, the photosensitive device 1222 converts light into electric charge proportional to the light intensity, and the transfer tube 1224 controls the conduction or disconnection of the circuit according to a control signal.
  • when the circuit is conducting, the source follower 124 converts the charge signal generated by the photosensitive device 1222 into a voltage signal.
  • the analog-to-digital converter 126 converts the voltage signal into a digital signal.
  • taking a 16M image sensor 10 as an example, the image sensor 10 of the embodiment of the present invention can also keep the 16M photosensitive pixel 122 output, that is, output a patch image. The patch image includes image pixel units, and each image pixel unit includes original pixels arranged in a 2*2 array, each original pixel having the same size as a photosensitive pixel 122.
  • however, since the filter unit 14a covering four adjacent photosensitive devices 1222 is of the same color, the four photosensitive devices 1222 are exposed separately but under filters of the same color; therefore, the four adjacent original pixels of each output image pixel unit have the same color, and the resolution of the image still cannot be improved.
  • the control method of the embodiment of the present invention can be used to process the output patch image to obtain a pseudo original image.
  • it can be understood that when the merged image is output, four adjacent same-color photosensitive pixels 122 are output as one merged pixel, so the four adjacent merged pixels of the merged image can still be regarded as a typical Bayer array and can be directly processed by the control device 200 to output a merged true color image.
  • in contrast, when the patch image is output, each photosensitive pixel 122 is output separately; since four adjacent photosensitive pixels 122 have the same color, the four adjacent original pixels of one image pixel unit have the same color, which is an atypical Bayer array.
  • the control device 200 cannot directly process an atypical Bayer array. That is, when the image sensor 10 uses the same control device 200, in order to be compatible with true color image output in both modes, i.e., the merged true color image output in the merge mode and the pseudo original true color image output in the patch mode, the patch image needs to be converted into a pseudo original image; in other words, the image pixel units of the atypical Bayer array need to be converted into the pixel arrangement of a typical Bayer array.
  • the pseudo original image includes pseudo original pixels arranged in a Bayer array.
  • the pseudo original pixels include a current pixel, and the original pixels include an associated pixel corresponding to the current pixel.
  • taking Fig. 16 as an example, the current pixels are R3'3' and R5'5', and the corresponding associated pixels are R33 and B55, respectively. When obtaining the current pixel R3'3', since R3'3' has the same color as its associated pixel R33, the pixel value of R33 is used directly as the pixel value of R3'3'; when obtaining the current pixel R5'5', since R5'5' differs in color from its associated pixel B55, the pixel value of B55 cannot be used directly, and the value must instead be calculated by interpolation from the associated pixel unit of R5'5'.
  • it should be noted that a pixel value, here and below, should be broadly understood as a color attribute value of the pixel, such as a color value.
  • the associated pixel unit includes a plurality of (for example, four) original pixels, in image pixel units, that have the same color as the current pixel and are adjacent to the current pixel.
  • it should be noted that "adjacent" here should be understood broadly. Taking Fig. 16 as an example, the associated pixel corresponding to R5'5' is B55; the image pixel units in which the associated pixel unit of R5'5' is located are those adjacent to the image pixel unit of B55 and of the same color as R5'5', namely the image pixel units in which R44, R74, R47 and R77 are located, and not other red image pixel units spatially farther from the image pixel unit of B55.
  • the red original pixels spatially closest to B55 are R44, R74, R47 and R77; that is, the associated pixel unit of R5'5' consists of R44, R74, R47 and R77, and R5'5' has the same color as, and is adjacent to, R44, R74, R47 and R77.
  • in this way, for current pixels in different situations, the original pixels are converted into pseudo original pixels in different ways, thereby converting the patch image into a pseudo original image. Since a filter with a special Bayer-array structure is used when capturing the image, the image signal-to-noise ratio is improved; and during image processing, the patch image is interpolated by the interpolation algorithm, which improves the resolution of the image.
  • referring to Fig. 17, in some embodiments, step S223 includes the following steps:
  • S2232: calculating the gradient amount in each direction of the associated pixel unit;
  • S2234: calculating the weight in each direction of the associated pixel unit; and
  • S2236: calculating the pixel value of the current pixel according to the gradient amounts and the weights.
  • the second computing unit 223 includes a first computing sub-unit 2232, a second computing sub-unit 2234, and a third computing sub-unit 2236.
  • Step S2232 can be implemented by the first computing sub-unit 2232
  • step S2234 can be implemented by the second computing sub-unit 2234
  • step S2236 can be implemented by the third computing sub-unit 2236.
  • in other words, the first calculating subunit 2232 is configured to calculate the gradient amount in each direction of the associated pixel unit,
  • the second calculating subunit 2234 is configured to calculate the weight in each direction of the associated pixel unit, and
  • the third calculating subunit 2236 is configured to calculate the pixel value of the current pixel according to the gradient amounts and the weights.
  • specifically, the interpolation references the energy gradient of the image in different directions: the associated pixel unit, which has the same color as the current pixel and is adjacent to it, is weighted by the gradient in each direction, and the pixel value of the current pixel is calculated by linear interpolation.
  • in the direction in which the energy changes less, the reference proportion is larger, and therefore the weight in the interpolation calculation is larger.
  • in some examples, for convenience of calculation, only the horizontal and vertical directions are considered.
  • R5'5' is interpolated from R44, R74, R47 and R77. Since there are no original pixels of the same color in its exact horizontal and vertical directions, the components of that color in the horizontal and vertical directions are first calculated from the associated pixel unit.
  • the components in the horizontal direction are R45 and R75, and the components in the vertical direction are R54 and R57; they can be calculated from R44, R74, R47 and R77, respectively.
  • specifically, R45 = 2/3*R44 + 1/3*R47, R75 = 2/3*R74 + 1/3*R77, R54 = 2/3*R44 + 1/3*R74, and R57 = 2/3*R47 + 1/3*R77.
  • then, the gradient amounts and weights in the horizontal and vertical directions are calculated respectively; that is, the gradient amounts of the color in different directions determine the reference weights of the different directions in the interpolation: in the direction with the smaller gradient amount, the weight is larger, and in the direction with the larger gradient amount, the weight is smaller.
  • the gradient amount in the horizontal direction is X1 = |R45 - R75|, the gradient amount in the vertical direction is X2 = |R54 - R57|, and the weights are W1 = X1/(X1+X2) and W2 = X2/(X1+X2).
  • from the above, R5'5' = (2/3*R45 + 1/3*R75)*W2 + (2/3*R54 + 1/3*R57)*W1. It can be understood that if X1 is greater than X2, then W1 is greater than W2, so the weight of the horizontal direction in the calculation is W2 and that of the vertical direction is W1, and vice versa.
  • in this way, the pixel value of the current pixel can be calculated according to the interpolation algorithm.
  • following the above treatment of the associated pixels, the original pixels can be converted into pseudo original pixels arranged in a typical Bayer array; that is, the four adjacent pseudo original pixels of each 2*2 array include one red pseudo original pixel, two green pseudo original pixels and one blue pseudo original pixel.
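The worked example above (R5'5' computed from R44, R47, R74 and R77) can be written out as a minimal sketch. The function name `interpolate_r55` and the flat-area fallback when X1 + X2 = 0 are illustrative assumptions; the patent text only defines the weighted case.

```python
def interpolate_r55(r44, r47, r74, r77):
    """First interpolation algorithm for the current pixel R5'5' from
    its associated pixel unit R44, R47, R74, R77 (worked example)."""
    # Components of the color in the horizontal and vertical directions.
    r45 = 2/3 * r44 + 1/3 * r47
    r75 = 2/3 * r74 + 1/3 * r77
    r54 = 2/3 * r44 + 1/3 * r74
    r57 = 2/3 * r47 + 1/3 * r77
    # Gradient amounts in the horizontal (X1) and vertical (X2) directions.
    x1 = abs(r45 - r75)
    x2 = abs(r54 - r57)
    if x1 + x2 == 0:  # flat area: both directions agree (assumed fallback)
        return (r45 + r54) / 2
    w1 = x1 / (x1 + x2)
    w2 = x2 / (x1 + x2)
    # Smaller gradient -> larger weight: horizontal terms get W2, vertical W1.
    return (2/3 * r45 + 1/3 * r75) * w2 + (2/3 * r54 + 1/3 * r57) * w1
```

With a purely vertical gradient, X2 = 0 and the vertical components receive all of the weight, as the text describes.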
  • it should be noted that interpolation is not limited to the manner disclosed in this embodiment of considering only same-color pixel values in the vertical and horizontal directions; pixel values of other colors may also be referenced, for example.
  • in some embodiments, before step S223, the method includes the step: S224: performing white balance compensation on the patch image;
  • and after step S223, the method includes the step: S225: performing white balance compensation restoration on the pseudo original image.
  • the third conversion module 220 includes a white balance compensation unit 224 and a white balance compensation reduction unit 225.
  • Step S224 may be implemented by the white balance compensation unit 224
  • step S225 may be implemented by the white balance compensation restoration unit 225.
  • the white balance compensation unit 224 is configured to perform white balance compensation on the patch image
  • the white balance compensation restoration unit 225 is configured to perform white balance compensation restoration on the pseudo original image.
  • specifically, in some examples, in converting the patch image into the pseudo original image, the interpolation of the red and blue pseudo original pixels often references not only the colors of original pixels of the same-color channel but also the color weights of original pixels of the green channel. Therefore, white balance compensation is required before interpolation, to exclude the influence of white balance from the interpolation calculation. In order not to destroy the white balance of the patch image, white balance compensation restoration is required after the interpolation, restoring according to the red, green and blue gain values used in the compensation.
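The compensate-then-restore round trip can be sketched as follows. This is a minimal sketch: modeling the compensation as per-channel multiplicative gains is an assumption about its form, and both function names are hypothetical.

```python
def wb_compensate(r, g, b, r_gain, b_gain):
    """Apply white balance gains before interpolation so the
    interpolation is not biased by white balance (illustrative form)."""
    return r * r_gain, g, b * b_gain

def wb_restore(r, g, b, r_gain, b_gain):
    """Undo the compensation after interpolation, restoring the patch
    image's original white balance with the same gain values."""
    return r / r_gain, g, b / b_gain
```

The restoration uses exactly the gains applied in compensation, so the round trip leaves the white balance of the image unchanged.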
  • in some embodiments, before step S223, the method includes the step: S226: performing dead pixel compensation on the patch image.
  • the third conversion module 220 includes a dead point compensation unit 226.
  • Step S226 can be implemented by the dead point compensation unit 226.
  • the dead point compensation unit 226 is used to perform dead point compensation on the patch image.
  • the image sensor 10 may have a dead pixel.
  • a dead pixel usually always presents the same color regardless of changes in sensitivity, and its presence affects image quality; therefore, to ensure that the interpolation is accurate and unaffected by dead pixels, dead pixel compensation is required before interpolation.
  • specifically, during dead pixel compensation, the original pixels may be detected.
  • when an original pixel is detected as a dead pixel, compensation may be performed according to the pixel values of the other original pixels in the image pixel unit in which it is located.
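A minimal sketch of that compensation rule: replacing the dead pixel with the average of the other three original pixels in its 2*2 image pixel unit is an illustrative choice (the patent does not fix a specific formula), and `compensate_dead_pixel` is a hypothetical helper name.

```python
def compensate_dead_pixel(unit, dead_index):
    """Replace a detected dead pixel with the average of the other
    original pixels of its image pixel unit (illustrative rule).
    unit: the pixel values of one 2*2 image pixel unit."""
    others = [v for i, v in enumerate(unit) if i != dead_index]
    repaired = list(unit)
    repaired[dead_index] = sum(others) / len(others)
    return repaired
```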
  • in some embodiments, before step S223, the method includes the step: S227: performing crosstalk compensation on the patch image.
  • the third conversion module 220 includes a crosstalk compensation unit 227.
  • Step S227 can be implemented by the crosstalk compensation unit 227.
  • the crosstalk compensation unit 227 is configured to perform crosstalk compensation on the patch image.
  • specifically, the four photosensitive pixels 122 in one photosensitive pixel unit 12a are covered by filters of the same color, and there may be differences in sensitivity among the photosensitive pixels 122, so that fixed-pattern noise would appear in the solid-color areas of the pseudo original true color image converted from the pseudo original image, affecting image quality; therefore, crosstalk compensation needs to be performed on the patch image.
  • in some embodiments, setting the compensation parameters includes the following steps: providing a predetermined light environment; setting imaging parameters of the imaging device; capturing multiple frames of images; processing the multiple frames of images to obtain crosstalk compensation parameters; and saving the crosstalk compensation parameters in the image processing device.
  • the predetermined light environment may include, for example, an LED homogenizing plate, a color temperature of about 5000 K, and a brightness of about 1000 lux.
  • the imaging parameters may include a gain value, a shutter value, and a lens position. After the relevant parameters are set, the crosstalk compensation parameters are acquired.
  • during processing, a plurality of patch images are first acquired with the set imaging parameters in the set light environment and merged into one patch image, which reduces the noise influence of calibrating on the basis of a single patch image.
  • taking the image pixel unit Gr in Fig. 21 as an example, it includes Gr1, Gr2, Gr3 and Gr4; the aim of crosstalk compensation is to calibrate photosensitive pixels whose sensitivities may differ to the same level through the compensation.
  • the average pixel value of the image pixel unit is Gr_avg = (Gr1+Gr2+Gr3+Gr4)/4, which basically characterizes the average sensitivity of the four photosensitive pixels. Taking this average as the base value, Gr1/Gr_avg, Gr2/Gr_avg, Gr3/Gr_avg and Gr4/Gr_avg are calculated; it can be understood that the ratio of each original pixel's value to the average pixel value of the image pixel unit basically reflects that pixel's deviation from the base value. The four ratios are recorded as compensation parameters in the memory of the relevant device and retrieved during imaging to compensate each original pixel, thereby reducing crosstalk and improving image quality.
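The ratio computation and its use at imaging time can be sketched as follows. This is a minimal sketch: dividing each pixel by its stored ratio is one illustrative way to apply the compensation, and both function names are hypothetical.

```python
def crosstalk_params(gr1, gr2, gr3, gr4):
    """Derive the four crosstalk compensation ratios of one image pixel
    unit Gr from its average pixel value Gr_avg."""
    gr_avg = (gr1 + gr2 + gr3 + gr4) / 4
    return tuple(g / gr_avg for g in (gr1, gr2, gr3, gr4))

def crosstalk_compensate(pixels, params):
    """Divide each original pixel by its stored ratio, calibrating the
    four photosensitive pixels to the same sensitivity level."""
    return tuple(p / k for p, k in zip(pixels, params))
```

Applying the parameters to the very unit they were derived from brings all four pixels back to the unit's average, which is the calibration target described above.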
  • during verification, a patch image is first acquired under the same light environment and imaging parameters, and crosstalk compensation is applied to it according to the calculated compensation parameters; then the compensated Gr'_avg, Gr'1/Gr'_avg, Gr'2/Gr'_avg, Gr'3/Gr'_avg and Gr'4/Gr'_avg are calculated. Based on the results, it is judged whether the compensation parameters are accurate, which can be considered from macroscopic and microscopic perspectives.
  • microscopic means that a given original pixel still deviates significantly after compensation and is easily perceived by the user after imaging; macroscopic takes a global view, i.e., when the total number of original pixels still deviating after compensation is large, even if each individual deviation is small, the whole is still perceived by the user. Therefore, a ratio threshold should be set for the microscopic case, and a ratio threshold plus a quantity threshold for the macroscopic case. In this way, the set crosstalk compensation parameters can be verified to ensure correct compensation parameters and reduce the influence of crosstalk on image quality.
  • in some embodiments, after step S223, the method further includes the following step:
  • S228: performing lens shading correction, demosaicing, noise reduction and edge sharpening on the pseudo original image.
  • the third conversion module 220 includes a processing unit 228.
  • step S228 may be implemented by the processing unit 228; in other words, the processing unit 228 is configured to perform lens shading correction, demosaicing, noise reduction and edge sharpening on the pseudo original image.
  • it can be understood that after the patch image is converted into the pseudo original image, the pseudo original pixels are arranged in a typical Bayer array and can be processed, the processing including lens shading correction, demosaicing, noise reduction and edge sharpening; after this processing, the pseudo original true color image can be output to the user.
  • step S216 includes the following steps:
  • S2162: converting the merged image into a restored image corresponding to the original image by a second interpolation algorithm, the complexity of the second interpolation algorithm being less than that of the first interpolation algorithm; and
  • S2164: converting the restored image into a merged true color image.
  • the first conversion module 216 includes a first conversion unit 2162 and a second conversion unit 2164.
  • the first conversion unit 2162 is configured to convert the merged image into a restored image corresponding to the original image by the second interpolation algorithm, the complexity of the second interpolation algorithm being less than that of the first interpolation algorithm.
  • the second conversion unit 2164 is configured to convert the restored image into a merged true color image. That is to say, step S2162 may be implemented by the first conversion unit 2162, and step S2164 may be implemented by the second conversion unit 2164.
  • the second interpolation algorithm has lower time complexity and space complexity than the first interpolation algorithm.
  • the complexity of the algorithm includes time complexity and space complexity.
  • time complexity measures how much time an algorithm takes,
  • and space complexity measures how much storage space it needs. A small time complexity indicates that the algorithm takes less time, and a small space complexity indicates that it requires less storage space. Therefore, the second interpolation algorithm helps improve the operation speed, making photographing smoother and improving the user experience.
  • for example, the second interpolation algorithm may simply magnify the merged image four times, without other complex operations, to obtain the restored image corresponding to the original image.
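That simple magnification can be sketched with nearest-neighbor repetition. An illustrative reading of "magnify four times" is assumed here: 2x per dimension, so the restored image has four times the pixels; `restore_merged` is a hypothetical helper name.

```python
import numpy as np

def restore_merged(merged: np.ndarray) -> np.ndarray:
    """Second interpolation algorithm sketch: magnify the merged image
    2x in each dimension (4x in pixel count) by nearest-neighbor
    repetition, yielding a restored image at the original size."""
    return np.repeat(np.repeat(merged, 2, axis=0), 2, axis=1)
```

No gradients or weights are computed, which is why this algorithm is far cheaper than the first interpolation algorithm.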
  • in this way, noise reduction and edge sharpening can be performed on the true color image after conversion, so that a better-quality true color image is output to the user after processing.
  • the electronic device 1000 of the embodiment of the present invention further includes a circuit board 500, a processor 600, a memory 700, and a power supply circuit 800.
  • the circuit board 500 is disposed inside the space of the electronic device 1000.
  • the processor 600 and the memory 700 are disposed on the circuit board 500.
  • the power circuit 800 is used to supply power to various circuits or devices of the electronic device 1000.
  • the memory 700 is for storing executable program code.
  • the processor 600 runs a program corresponding to the executable program code by reading the executable program code stored in the memory 700, so as to execute the control method of the above embodiments.
  • the processor 600 is configured to perform the following steps:
  • the merged image comprises a merged pixel array, and the plurality of photosensitive pixels of the same photosensitive pixel unit are combined and output as one merged pixel;
  • the merged image is converted into a merged true color image.
  • the foregoing explanation of the control method and the control device 200 also applies to the electronic device 1000 of the embodiment of the present invention, and details are not repeated here.
  • a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • more specific examples of the computer readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM).
  • the computer readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or, if necessary, processing it in another suitable manner, and then stored in a computer memory.
  • portions of the invention may be implemented in hardware, software, firmware or a combination thereof.
  • multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system.
  • for example, if implemented in hardware, as in another embodiment, they may be implemented by any one of, or a combination of, the following techniques known in the art: a discrete logic circuit having logic gates for performing logic functions on data signals, an application specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
  • each functional unit in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module.
  • the above integrated module may be implemented in the form of hardware or in the form of a software functional module.
  • the integrated module, if implemented in the form of a software functional module and sold or used as an independent product, may also be stored in a computer readable storage medium.
  • the above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.


Abstract

The present invention discloses a control method for an electronic device (1000). The control method includes the steps of: (S211) controlling an image sensor (10) to output a merged image; (S212) dividing the merged image into analysis areas arranged in an array; (S213) calculating a phase difference of each analysis area; (S214) merging the analysis areas whose phase differences meet a predetermined condition into a focus area; (S215) identifying whether a human face is present in the focus area; and (S216) converting the merged image into a merged true color image when a human face is present. The present invention also discloses a control device (200) and an electronic device (1000).

Description

Control method, control device and electronic device
Priority Information
This application claims priority to and the benefit of Chinese Patent Application No. 201611079892.4, filed with the State Intellectual Property Office of China on November 29, 2016, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to imaging technology, and more particularly to a control method, a control device and an electronic device.
Background
An existing image sensor includes a photosensitive pixel unit array and a filter unit array disposed over the photosensitive pixel unit array. Each filter unit covers a corresponding photosensitive pixel unit, and each photosensitive pixel unit includes a plurality of photosensitive pixels. In operation, the image sensor can be controlled to expose and output a merged image, which can be converted into a merged true color image by an image processing method and saved. The merged image includes a merged pixel array, and the plurality of photosensitive pixels of the same photosensitive pixel unit are merged and output as a corresponding merged pixel. In this way, the signal-to-noise ratio of the merged image is improved; however, the resolution of the merged image is reduced. The image sensor can also be controlled to expose and output a high-pixel patch image. The patch image includes an original pixel array, with each photosensitive pixel corresponding to one original pixel. However, since the plurality of original pixels corresponding to the same filter unit have the same color, the resolution of the patch image likewise cannot be improved. Therefore, the high-pixel patch image needs to be converted into a high-pixel pseudo original image by interpolation, where the pseudo original image may include pseudo original pixels arranged in a Bayer array. The pseudo original image can be converted into a pseudo original true color image by an image processing method and saved. However, interpolation is resource-intensive and time-consuming, and not all scenes are suitable for, or require, outputting a pseudo original true color image.
Summary
Embodiments of the present invention provide a control method, a control device and an electronic device.
The present invention provides a control method for controlling an electronic device. The electronic device includes an imaging device and a display. The imaging device includes an image sensor. The image sensor includes a photosensitive pixel unit array and a filter unit array disposed over the photosensitive pixel unit array. Each filter unit covers a corresponding photosensitive pixel unit, and each photosensitive pixel unit includes a plurality of photosensitive pixels. The control method includes the following steps:
controlling the image sensor to output a merged image, the merged image including a merged pixel array, and a plurality of photosensitive pixels of the same photosensitive pixel unit being merged and output as one merged pixel;
dividing the merged image into analysis areas arranged in an array;
calculating a phase difference of each analysis area;
merging the analysis areas whose phase differences meet a predetermined condition into a focus area;
identifying whether a human face is present in the focus area; and
converting the merged image into a merged true color image when a human face is present.
The present invention provides a control device for controlling an electronic device. The electronic device includes an imaging device and a display. The imaging device includes an image sensor. The image sensor includes a photosensitive pixel unit array and a filter unit array disposed over the photosensitive pixel unit array. Each filter unit covers a corresponding photosensitive pixel unit, and each photosensitive pixel unit includes a plurality of photosensitive pixels. The control device includes:
a first control module configured to control the image sensor to output a merged image, the merged image including a merged pixel array, and a plurality of photosensitive pixels of the same photosensitive pixel unit being merged and output as one merged pixel;
a dividing module configured to divide the merged image into analysis areas arranged in an array;
a calculating module configured to calculate a phase difference of each analysis area;
a merging module configured to merge the analysis areas whose phase differences meet a predetermined condition into a focus area;
an identifying module configured to identify whether a human face is present in the focus area; and
a first conversion module configured to convert the merged image into a merged true color image when a human face is present.
The present invention provides an electronic device including an imaging device, a display and the above control device.
The present invention provides an electronic device, including:
an imaging device, the imaging device including an image sensor, the image sensor including a photosensitive pixel unit array and a filter unit array disposed over the photosensitive pixel unit array, each filter unit covering a corresponding photosensitive pixel unit, and each photosensitive pixel unit including a plurality of photosensitive pixels;
a display; and
a processor configured to:
control the image sensor to output a merged image, the merged image including a merged pixel array, and a plurality of photosensitive pixels of the same photosensitive pixel unit being merged and output as one merged pixel;
divide the merged image into analysis areas arranged in an array;
calculate a phase difference of each analysis area;
merge the analysis areas whose phase differences meet a predetermined condition into a focus area;
identify whether a human face is present in the focus area; and
convert the merged image into a merged true color image when a human face is present.
By identifying and judging the image within the depth of field, the control method, control device and electronic device of the embodiments of the present invention control the image sensor to output an appropriate image, avoiding the heavy workload caused by having the image sensor always output high-quality images, thereby reducing the working time of the electronic device, improving working efficiency and improving user satisfaction.
Additional aspects and advantages of embodiments of the present invention will be given in part in the following description, become apparent in part from the following description, or be learned from the practice of the embodiments of the present invention.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flow chart of a control method according to an embodiment of the present invention;
Fig. 2 is a schematic block diagram of a control device according to an embodiment of the present invention;
Fig. 3 is a schematic block diagram of an electronic device according to an embodiment of the present invention;
Fig. 4 is another schematic flow chart of a control method according to an embodiment of the present invention;
Fig. 5 is another schematic block diagram of a control device according to an embodiment of the present invention;
Fig. 6 is still another schematic flow chart of a control method according to an embodiment of the present invention;
Fig. 7 is yet another schematic flow chart of a control method according to an embodiment of the present invention;
Fig. 8 is still another schematic block diagram of a control device according to an embodiment of the present invention;
Fig. 9 is a schematic block diagram of a third conversion module according to an embodiment of the present invention;
Fig. 10 is a schematic block diagram of an image sensor according to an embodiment of the present invention;
Fig. 11 is a schematic circuit diagram of an image sensor according to an embodiment of the present invention;
Fig. 12 is a schematic diagram of a filter unit according to an embodiment of the present invention;
Fig. 13 is a schematic structural diagram of an image sensor according to an embodiment of the present invention;
Fig. 14 is a schematic state diagram of a merged image according to an embodiment of the present invention;
Fig. 15 is a schematic state diagram of a patch image according to an embodiment of the present invention;
Fig. 16 is a schematic state diagram of a control method according to an embodiment of the present invention;
Fig. 17 is yet another schematic flow chart of a control method according to an embodiment of the present invention;
Fig. 18 is a schematic block diagram of a second calculating unit according to an embodiment of the present invention;
Fig. 19 is yet another schematic flow chart of a control method according to an embodiment of the present invention;
Fig. 20 is another schematic block diagram of a third conversion module according to an embodiment of the present invention;
Fig. 21 is a schematic diagram of an image pixel unit of a patch image according to an embodiment of the present invention;
Fig. 22 is yet another schematic flow chart of a control method according to an embodiment of the present invention;
Fig. 23 is still another schematic block diagram of a third conversion module according to an embodiment of the present invention;
Fig. 24 is yet another schematic flow chart of a control method according to an embodiment of the present invention;
Fig. 25 is a schematic block diagram of a first conversion module according to an embodiment of the present invention;
Fig. 26 is another schematic block diagram of an electronic device according to an embodiment of the present invention.
Description of reference numerals of main elements:
electronic device 1000, imaging device 100, image sensor 10, photosensitive pixel unit array 12, photosensitive pixel unit 12a, photosensitive pixel subunit 120, photosensitive pixel 122, photosensitive device 1222, transfer tube 1224, source follower 124, analog-to-digital converter 126, adder 128, filter unit array 14, filter unit 14a, control device 200, first control module 211, dividing module 212, calculating module 213, merging module 214, identifying module 215, first conversion module 216, judging module 217, second conversion module 218, second control module 219, third conversion module 220, fourth conversion module 230, judging unit 221, first calculating unit 222, second calculating unit 223, first calculating subunit 2232, second calculating subunit 2234, third calculating subunit 2236, white balance compensation unit 224, white balance compensation restoration unit 225, dead pixel compensation unit 226, crosstalk compensation unit 227, processing unit 228, first conversion unit 2162, second conversion unit 2164, circuit board 500, processor 600, memory 700, power supply circuit 800.
Detailed Description
Embodiments of the present invention are described in detail below, and examples of the embodiments are illustrated in the accompanying drawings, in which the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present invention, and are not to be construed as limiting the present invention.
In the description of the present invention, it should be understood that the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, features defined as "first" and "second" may explicitly or implicitly include one or more of said features. In the description of the present invention, "a plurality of" means two or more, unless otherwise specifically defined.
In the description of the present invention, it should be noted that, unless otherwise expressly specified and limited, the terms "mounted", "connected" and "coupled" should be understood broadly; for example, a connection may be a fixed connection, a detachable connection or an integral connection; it may be a mechanical connection, an electrical connection or mutual communication; it may be a direct connection or an indirect connection through an intermediate medium, and it may be internal communication between two elements or an interaction relationship between two elements. For those of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific circumstances.
The following disclosure provides many different embodiments or examples for implementing different structures of the present invention. To simplify the disclosure of the present invention, components and arrangements of specific examples are described below. Of course, they are merely examples and are not intended to limit the present invention. Furthermore, the present invention may repeat reference numerals and/or reference letters in different examples; such repetition is for the purpose of simplicity and clarity and does not in itself indicate a relationship between the various embodiments and/or arrangements discussed. In addition, the present invention provides examples of various specific processes and materials, but those of ordinary skill in the art will recognize the applicability of other processes and/or the use of other materials.
Referring to Fig. 1, a control method according to an embodiment of the present invention is used to control an electronic device 1000. The electronic device 1000 includes an imaging device 100 and a display. The imaging device 100 includes an image sensor 10. The image sensor 10 includes a photosensitive pixel unit array 12 and a filter unit array 14 disposed over the photosensitive pixel unit array 12. Each filter unit 14a covers a corresponding photosensitive pixel unit 12a, and each photosensitive pixel unit 12a includes a plurality of photosensitive pixels 122. The control method includes the following steps:
S211: controlling the image sensor to output a merged image, the merged image including a merged pixel array, and a plurality of photosensitive pixels of the same photosensitive pixel unit being merged and output as one merged pixel;
S212: dividing the merged image into analysis areas arranged in an array;
S213: calculating a phase difference of each analysis area;
S214: merging the analysis areas whose phase differences meet a predetermined condition into a focus area;
S215: identifying whether a human face is present in the focus area; and
S216: converting the merged image into a merged true color image when a human face is present.
Referring to Fig. 2, a control device 200 according to an embodiment of the present invention is used to control an electronic device 1000. The electronic device 1000 includes an imaging device 100 and a display. The imaging device 100 includes an image sensor 10. The image sensor 10 includes a photosensitive pixel unit array 12 and a filter unit array 14 disposed over the photosensitive pixel unit array 12. Each filter unit 14a covers a corresponding photosensitive pixel unit 12a, and each photosensitive pixel unit 12a includes a plurality of photosensitive pixels 122. The control device 200 includes a first control module 211, a dividing module 212, a calculating module 213, a merging module 214, an identifying module 215 and a first conversion module 216. The first control module 211 is configured to control the image sensor 10 to output a merged image, the merged image including a merged pixel array, and a plurality of photosensitive pixels 122 of the same photosensitive pixel unit 12a being merged and output as one merged pixel. The dividing module 212 is configured to divide the merged image into analysis areas arranged in an array. The calculating module 213 is configured to calculate a phase difference of each analysis area. The merging module 214 is configured to merge the analysis areas whose phase differences meet a predetermined condition into a focus area. The identifying module 215 is configured to identify whether a human face is present in the focus area. The first conversion module 216 is configured to convert the merged image into a merged true color image when a human face is present.
That is to say, step S211 may be implemented by the first control module 211, step S212 by the dividing module 212, step S213 by the calculating module 213, step S214 by the merging module 214, step S215 by the identifying module 215, and step S216 by the first conversion module 216.
Referring to Fig. 3, an electronic device 1000 according to an embodiment of the present invention includes an imaging device 100, a display and the control device 200 of the embodiment of the present invention; that is to say, the control device 200 of the embodiment of the present invention may be applied to the electronic device 1000 of the embodiment of the present invention.
By identifying and judging the image within the depth of field, the control method, control device 200 and electronic device 1000 of the present invention control the image sensor 10 to output an appropriate image, avoiding the heavy workload caused by having the image sensor 10 always output high-quality images, thereby reducing the working time of the electronic device 1000, improving working efficiency and improving user satisfaction.
In some embodiments, the electronic device 1000 includes electronic apparatus provided with an imaging device, such as a mobile phone or a tablet computer, which is not limited herein. The electronic device 1000 of the embodiment of the present invention is a mobile phone.
In some embodiments, the imaging device 100 includes a front camera or a rear camera.
In some embodiments, step S212 may divide the merged image into M*N analysis areas. In this way, the phase difference of each analysis area can be calculated in step S213, and the focus area, i.e., the image within the depth of field, can be obtained by judging the phase differences. The image within the depth of field is clear and has high reference value for deciding whether to perform image processing, whereas the image outside the depth of field is blurred and has low reference value for that decision. Therefore, analyzing the image within the depth of field to decide whether to process the image makes it possible to obtain a high-quality image while reducing the workload.
In some embodiments, when the user takes a selfie or shoots a portrait, very high image quality is not needed; therefore, when a human face is detected in the focus area, the control device 200 controls the image sensor 10 to output a merged image and converts the merged image into a merged true color image.
Referring to Fig. 4, in some embodiments, the control method includes the following steps:
S217: when no human face is present, judging whether the brightness of the focus area is less than or equal to a brightness threshold, whether the green ratio is less than or equal to a ratio threshold, and whether the spatial frequency is less than or equal to a frequency threshold;
S218: converting the merged image into a merged true color image when the brightness of the focus area is less than or equal to the brightness threshold, the green ratio is less than or equal to the ratio threshold, and the spatial frequency is less than or equal to the frequency threshold.
Referring to Fig. 5, in some embodiments, the control device 200 includes a judging module 217 and a second conversion module 218. The judging module 217 is configured to judge, when no human face is present, whether the brightness of the focus area is less than or equal to the brightness threshold, whether the green ratio is less than or equal to the ratio threshold, and whether the spatial frequency is less than or equal to the frequency threshold. The second conversion module 218 is configured to convert the merged image into a merged true color image when the brightness of the focus area is less than or equal to the brightness threshold, the green ratio is less than or equal to the ratio threshold, and the spatial frequency is less than or equal to the frequency threshold.
In some embodiments, the characteristic features of a landscape image are its brightness, green ratio and spatial frequency; therefore, by comparing each of these features with the corresponding threshold, it can be judged whether the image is a landscape image. Since a landscape image requires higher image quality and the ambient brightness is relatively high, when no landscape image is detected, the control device 200 controls the image sensor 10 to output a merged image and converts the merged image into a merged true color image; when a landscape image is detected, the control device 200 controls the image sensor 10 to output a patch image, first converts the patch image into a pseudo original image, and then converts the pseudo original image into a pseudo original true color image.
In some embodiments, the brightness threshold, the ratio threshold and the frequency threshold may be set differently by the user according to different environments; in this way, the user can set one or more thresholds according to environmental differences to achieve a more desirable shooting effect.
In some embodiments, the brightness threshold, the ratio threshold and the frequency threshold may also be a plurality of different thresholds stored in the memory of the electronic device 1000 for the user to select, which is not limited herein.
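The mode decision in steps S215 to S219 can be condensed into one function. This is a minimal sketch: the handling of mixed cases, where only some features exceed their thresholds, is an assumption left open by the text, and the function name is hypothetical.

```python
def choose_output_mode(has_face, brightness, green_ratio, spatial_freq,
                       brightness_th, ratio_th, freq_th):
    """Decide between the merged-image and patch-image output paths.
    A face, or a non-landscape scene, keeps the cheap merged path;
    a landscape scene (all three features above their thresholds)
    takes the higher-quality patch-image path."""
    if has_face:                                   # S215/S216
        return "merged"
    if (brightness <= brightness_th and green_ratio <= ratio_th
            and spatial_freq <= freq_th):          # S217/S218
        return "merged"
    if (brightness > brightness_th and green_ratio > ratio_th
            and spatial_freq > freq_th):           # S219
        return "patch"
    return "merged"  # mixed cases: an assumption, not specified in the text
```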
Referring to Fig. 6, in some embodiments, the control method includes the following steps:
S219: controlling the image sensor to output a patch image when the brightness of the focus area is greater than the brightness threshold, the green ratio is greater than the ratio threshold, and the spatial frequency is greater than the frequency threshold, the patch image including image pixel units arranged in a predetermined array, each image pixel unit including a plurality of original pixels, and each photosensitive pixel corresponding to one original pixel;
S220: converting the patch image into a pseudo original image, the pseudo original image including pseudo original pixels arranged in an array, the pseudo original pixels including a current pixel, and the original pixels including an associated pixel corresponding to the current pixel; and
S230: converting the pseudo original image into a pseudo original true color image.
Referring to Fig. 7, in some embodiments, step S220 includes the following steps:
S221: judging whether the color of the current pixel is the same as that of its associated pixel;
S222: using the pixel value of the associated pixel as the pixel value of the current pixel when the color of the current pixel is the same as that of the associated pixel; and
S223: calculating the pixel value of the current pixel by a first interpolation algorithm according to the pixel values of an associated pixel unit when the color of the current pixel differs from that of the associated pixel, the image pixel units including the associated pixel unit, and the associated pixel unit having the same color as the current pixel and being adjacent to the current pixel.
Referring to Figs. 8 and 9, the control device 200 includes a second control module 219, a third conversion module 220 and a fourth conversion module 230. The second control module 219 is configured to control the image sensor 10 to output a patch image when the brightness of the focus area is greater than the brightness threshold, the green ratio is greater than the ratio threshold, and the spatial frequency is greater than the frequency threshold, the patch image including image pixel units arranged in a predetermined array, each image pixel unit including a plurality of original pixels, and each photosensitive pixel corresponding to one original pixel. The third conversion module 220 is configured to convert the patch image into a pseudo original image, the pseudo original image including pseudo original pixels arranged in an array, the pseudo original pixels including a current pixel, and the original pixels including an associated pixel corresponding to the current pixel. The third conversion module 220 includes a judging unit 221, a first calculating unit 222 and a second calculating unit 223. The judging unit 221 is configured to judge whether the color of the current pixel is the same as that of the associated pixel. The first calculating unit 222 is configured to use the pixel value of the associated pixel as the pixel value of the current pixel when their colors are the same. The second calculating unit 223 is configured to calculate the pixel value of the current pixel by the first interpolation algorithm according to the pixel values of the associated pixel unit when their colors differ, the image pixel units including the associated pixel unit, and the associated pixel unit having the same color as the current pixel and being adjacent to it. The fourth conversion module 230 is configured to convert the pseudo original image into a pseudo original true color image.
It can be understood that when the several feature points of the image all meet the predetermined conditions, the control device 200 judges the image to be a landscape image. Since a landscape image demands higher image quality, the control device 200 controls the image sensor 10 to output a patch image, converts the patch image into a pseudo original image, and then converts the pseudo original image into a pseudo original true color image, so that the user can obtain a high-quality landscape image.
Referring to Figs. 10 to 13 together, an image sensor 10 according to an embodiment of the present invention includes a photosensitive pixel unit array 12 and a filter unit array 14 disposed over the photosensitive pixel unit array 12.
Further, the photosensitive pixel unit array 12 includes a plurality of photosensitive pixel units 12a, and each photosensitive pixel unit 12a includes a plurality of adjacent photosensitive pixels 122. Each photosensitive pixel 122 includes a photosensitive device 1222 and a transfer tube 1224, where the photosensitive device 1222 may be a photodiode and the transfer tube 1224 may be a MOS transistor.
The filter unit array 14 includes a plurality of filter units 14a, and each filter unit 14a covers a corresponding photosensitive pixel unit 12a.
Specifically, in some examples, the filter unit array 14 includes a Bayer array; that is to say, four adjacent filter units 14a are respectively one red filter unit, one blue filter unit and two green filter units.
Each photosensitive pixel unit 12a corresponds to a filter unit 14a of the same color. If a photosensitive pixel unit 12a includes a total of n adjacent photosensitive devices 1222, then one filter unit 14a covers the n photosensitive devices 1222 of that photosensitive pixel unit 12a; the filter unit 14a may be of an integral construction or may be assembled from n separate sub-filters.
In some embodiments, each photosensitive pixel unit 12a includes four adjacent photosensitive pixels 122, and every two adjacent photosensitive pixels 122 together constitute one photosensitive pixel subunit 120, the photosensitive pixel subunit 120 further including a source follower 124 and an analog-to-digital converter 126. The photosensitive pixel unit 12a further includes an adder 128. One electrode of each transfer tube 1224 in a photosensitive pixel subunit 120 is connected to the cathode of the corresponding photosensitive device 1222, the other end of each transfer tube 1224 is connected in common to the gate of the source follower 124, and the source of the source follower 124 is connected to an analog-to-digital converter 126. The source follower 124 may be a MOS transistor. The two photosensitive pixel subunits 120 are connected to the adder 128 through their respective source followers 124 and analog-to-digital converters 126.
That is to say, in one photosensitive pixel unit 12a of the image sensor 10 of the embodiment of the present invention, the four adjacent photosensitive devices 1222 share one filter unit 14a of the same color, each photosensitive device 1222 is connected to a corresponding transfer tube 1224, every two adjacent photosensitive devices 1222 share one source follower 124 and one analog-to-digital converter 126, and the four adjacent photosensitive devices 1222 share one adder 128.
Further, the four adjacent photosensitive devices 1222 are arranged in a 2*2 array, and the two photosensitive devices 1222 of one photosensitive pixel subunit 120 may be in the same column.
During imaging, when the two photosensitive pixel subunits 120, i.e., the four photosensitive devices 1222, covered by the same filter unit 14a are exposed simultaneously, the pixels can be merged and a merged image can be output.
Specifically, the photosensitive device 1222 converts light into electric charge, the generated charge being proportional to the light intensity, and the transfer tube 1224 controls the conduction or disconnection of the circuit according to a control signal. When the circuit is conducting, the source follower 124 converts the charge signal generated by the photosensitive device 1222 under illumination into a voltage signal. The analog-to-digital converter 126 converts the voltage signal into a digital signal. The adder 128 adds two digital signals together for output.
Referring to Fig. 14, taking a 16M image sensor 10 as an example, the image sensor 10 of the embodiment of the present invention can merge its 16M photosensitive pixels 122 into 4M, that is, output a merged image; after merging, a photosensitive pixel 122 equivalently becomes 4 times its original size, which increases the sensitivity of the photosensitive pixel 122. In addition, since the noise in the image sensor 10 is mostly random noise, one or two of the four photosensitive pixels 122 may contain noise before merging; after the four photosensitive pixels 122 are merged into one large photosensitive pixel 122, the influence of that noise on the large pixel is reduced, that is, the noise is attenuated and the signal-to-noise ratio is improved.
However, while the photosensitive pixel 122 becomes larger, the resolution of the merged image also decreases because the pixel count decreases.
During imaging, when the four photosensitive devices 1222 covered by the same filter unit 14a are exposed in sequence, a patch image can be output.
Specifically, the photosensitive device 1222 converts light into electric charge, the generated charge being proportional to the light intensity, and the transfer tube 1224 controls the conduction or disconnection of the circuit according to a control signal. When the circuit is conducting, the source follower 124 converts the charge signal generated by the photosensitive device 1222 under illumination into a voltage signal. The analog-to-digital converter 126 converts the voltage signal into a digital signal.
Referring to Fig. 15, taking a 16M image sensor 10 as an example, the image sensor 10 of the embodiment of the present invention can also keep the 16M photosensitive pixel 122 output, that is, output a patch image. The patch image includes image pixel units, each image pixel unit including original pixels arranged in a 2*2 array, and each original pixel having the same size as a photosensitive pixel 122. However, since the filter unit 14a covering four adjacent photosensitive devices 1222 is of the same color, i.e., although the four photosensitive devices 1222 are exposed separately, the filter units 14a covering them have the same color, the four adjacent original pixels of each output image pixel unit have the same color, and the resolution of the image still cannot be improved.
The control method of the embodiment of the present invention can be used to process the output patch image to obtain a pseudo original image.
It can be understood that when the merged image is output, four adjacent same-color photosensitive pixels 122 are output as one merged pixel, so the four adjacent merged pixels in the merged image can still be regarded as a typical Bayer array and can be directly processed by the control device 200 to output a merged true color image. When the patch image is output, however, each photosensitive pixel 122 is output separately; since four adjacent photosensitive pixels 122 have the same color, the four adjacent original pixels of one image pixel unit have the same color, which is an atypical Bayer array. The control device 200 cannot directly process an atypical Bayer array; that is, when the image sensor 10 uses the same control device 200, in order to be compatible with true color image output in both modes, i.e., the merged true color image output in the merge mode and the pseudo original true color image output in the patch mode, the patch image needs to be converted into a pseudo original image, in other words, the image pixel units of the atypical Bayer array need to be converted into the pixel arrangement of a typical Bayer array.
The pseudo original image includes pseudo original pixels arranged in a Bayer array. The pseudo original pixels include a current pixel, and the original pixels include an associated pixel corresponding to the current pixel.
Referring to Fig. 16, taking Fig. 16 as an example, the current pixels are R3'3' and R5'5', and the corresponding associated pixels are R33 and B55, respectively.
When obtaining the current pixel R3'3', since R3'3' has the same color as its associated pixel R33, the pixel value of R33 is directly used as the pixel value of R3'3' in the conversion.
When obtaining the current pixel R5'5', since R5'5' differs in color from its associated pixel B55, the pixel value of B55 obviously cannot be used directly as the pixel value of R5'5'; the value needs to be calculated by interpolation from the associated pixel unit of R5'5'.
It should be noted that a pixel value, here and below, should be broadly understood as a color attribute value of the pixel, such as a color value.
The associated pixel unit includes a plurality of (for example, four) original pixels, in image pixel units, that have the same color as the current pixel and are adjacent to the current pixel.
It should be noted that "adjacent" here should be understood broadly. Taking Fig. 16 as an example, the associated pixel corresponding to R5'5' is B55; the image pixel units in which the associated pixel unit of R5'5' is located are those adjacent to the image pixel unit of B55 and of the same color as R5'5', namely the image pixel units in which R44, R74, R47 and R77 are located, and not other red image pixel units spatially farther from the image pixel unit of B55. The red original pixels spatially closest to B55 are R44, R74, R47 and R77; that is to say, the associated pixel unit of R5'5' consists of R44, R74, R47 and R77, and R5'5' has the same color as, and is adjacent to, R44, R74, R47 and R77.
In this way, for current pixels in different situations, the original pixels are converted into pseudo original pixels in different ways, thereby converting the patch image into a pseudo original image. Since a filter with a special Bayer-array structure is used when capturing the image, the image signal-to-noise ratio is improved; and during image processing, the patch image is interpolated by the interpolation algorithm, which improves the resolution of the image.
请参阅图17,在某些实施方式中,步骤S223包括以下步骤:
S2232:计算关联像素单元各个方向上的渐变量;
S2234:计算关联像素单元各个方向上的权重;和
S2236:根据渐变量及权重计算当前像素的像素值。
请参阅图18,在某些实施方式中,第二计算单元223包括第一计算子单元2232、第二计算子单元2234和第三计算子单元2236。步骤S2232可以由第一计算子单元2232实现,步骤S2234可以由第二计算子单元2234实现,步骤S2236可以由第三计算子单元2236实现。或者说,第一计算子单元2232用于计算关联像素单元各个方向上的渐变量,第二计算子单元2234用于计算关联像素单元各个方向上的权重,第三计算子单元2236用于根据渐变量及权重计算当前像素的像素值。
具体地,插值处理方式是参考图像在不同方向上的能量渐变,将与当前像素对应的颜 色相同且相邻的关联像素单元依据在不同方向上的渐变权重大小,通过线性插值的方式计算得到当前像素的像素值。其中,在能量变化量较小的方向上,参考比重较大,因此,在插值计算时的权重较大。
在某些示例中,为方便计算,仅考虑水平和垂直方向。
R5’5’由R44、R74、R47和R77插值得到,而在水平和垂直方向上并不存在颜色相同的原始像素,因此需首根据关联像素单元计算在水平和垂直方向上该颜色的分量。其中,水平方向上的分量为R45和R75、垂直方向的分量为R54和R57可以分别通过R44、R74、R47和R77计算得到。
具体地,R45=R44*2/3+R47*1/3,R75=2/3*R74+1/3*R77,R54=2/3*R44+1/3*R74,R57=2/3*R47+1/3*R77。
然后,分别计算在水平和垂直方向的渐变量及权重,也即是说,根据该颜色在不同方向的渐变量,以确定在插值时不同方向的参考权重,在渐变量小的方向,权重较大,而在渐变量较大的方向,权重较小。其中,在水平方向的渐变量X1=|R45-R75|,在垂直方向上的渐变量X2=|R54-R57|,W1=X1/(X1+X2),W2=X2/(X1+X2)。
如此,根据上述可计算得到,R5’5’=(2/3*R45+1/3*R75)*W2+(2/3*R54+1/3*R57)*W1。可以理解,若X1大于X2,则W1大于W2,因此计算时水平方向的权重为W2,而垂直方向的权重为W1,反之亦然。
如此,可根据插值算法计算得到当前像素的像素值。依据上述对关联像素的处理方式,可将原始像素转化为呈典型拜耳阵列排布的仿原像素,也即是说,相邻的四个2*2阵列的仿原像素包括一个红色仿原像素,两个绿色仿原像素和一个蓝色仿原像素。
需要说明的是,插值的方式包括但不限于本实施例中公开的在计算时仅考虑垂直和水平两个方向相同颜色的像素值的方式,例如还可以参考其他颜色的像素值。
请参阅图19和图20,在某些实施方式中,步骤S223前包括步骤:
S224:对色块图像做白平衡补偿;
步骤S223后包括步骤:
S225:对仿原图像做白平衡补偿还原。
在某些实施方式中,第三转化模块220包括白平衡补偿单元224和白平衡补偿还原单元225。步骤S224可以由白平衡补偿单元224实现,步骤S225可以由白平衡补偿还原单元225实现。或者说,白平衡补偿单元224用于对色块图像做白平衡补偿,白平衡补偿还原单元225用于对仿原图像做白平衡补偿还原。
具体地,在一些示例中,在将色块图像转化为仿原图像的插值过程中,红色和蓝色仿原像素往往不仅参考与其颜色相同的通道的原始像素的颜色,还会参考绿色通道的原始像素的颜色权重,因此,在插值前需要进行白平衡补偿,以在插值计算中排除白平衡的影响。为了不破坏色块图像的白平衡,在插值之后需要对仿原图像进行白平衡补偿还原,还原时根据补偿中红色、绿色及蓝色的增益值进行还原。
如此,可排除在插值过程中白平衡的影响,并且能够使得插值后得到的仿原图像保持色块图像的白平衡。
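"插值前补偿、插值后按相同增益还原"的往返过程可用如下 Python 片段示意。增益字典的形式与取值均为示例假设,并非本发明的限定实现:

```python
def white_balance_compensate(channels, gains):
    """插值前:按各颜色通道的增益对像素值做白平衡补偿(示意)。"""
    return {ch: [v * gains[ch] for v in vals] for ch, vals in channels.items()}

def white_balance_restore(channels, gains):
    """插值后:按相同增益做白平衡补偿还原,保持色块图像原有的白平衡(示意)。"""
    return {ch: [v / gains[ch] for v in vals] for ch, vals in channels.items()}
```

由于还原使用与补偿相同的增益值,两步往返后像素值回到原值,从而既排除了插值中的白平衡影响,又不破坏色块图像的白平衡。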
请再次参阅图19和图20,在某些实施方式中,步骤S223前包括步骤:
S226:对色块图像做坏点补偿。
在某些实施方式中,第三转化模块220包括坏点补偿单元226。步骤S226可以由坏点补偿单元226实现。或者说,坏点补偿单元226用于对色块图像做坏点补偿。
可以理解,受限于制造工艺,图像传感器10可能会存在坏点,坏点通常不随感光度变化而始终呈现同一颜色,坏点的存在将影响图像质量,因此,为保证插值的准确,不受坏点的影响,需要在插值前进行坏点补偿。
具体地,坏点补偿过程中,可以对原始像素进行检测,当检测到某一原始像素为坏点时,可根据其所在的图像像素单元的其他原始像素的像素值进行坏点补偿。
如此,可排除坏点对插值处理的影响,提高图像质量。
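坏点补偿中"以同一图像像素单元内其他原始像素的像素值进行补偿"的一种直接做法,是取其余像素的均值替换坏点。以下 Python 片段仅为示意,具体补偿策略(如取均值)为示例假设:

```python
def compensate_dead_pixel(unit, dead_index):
    """用同一图像像素单元内其他原始像素的均值替换已检测出的坏点(示意)。

    unit: 一个图像像素单元内 2*2 共 4 个原始像素的像素值列表
    dead_index: 已检测出的坏点在 unit 中的下标
    """
    others = [v for i, v in enumerate(unit) if i != dead_index]
    fixed = list(unit)
    fixed[dead_index] = sum(others) / len(others)  # 以其余原始像素的均值补偿
    return fixed
```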
请再次参阅图19和图20,在某些实施方式中,步骤S223前包括步骤:
S227:对色块图像做串扰补偿。
在某些实施方式中,第三转化模块220包括串扰补偿单元227。步骤S227可以由串扰补偿单元227实现。或者说,串扰补偿单元227用于对色块图像做串扰补偿。
具体地,一个感光像素单元12a中的四个感光像素122覆盖同一颜色的滤光片,而感光像素122之间可能存在感光度的差异,以至于由仿原图像转化输出的仿原真彩图像中的纯色区域会出现固定型谱噪声,影响图像的质量。因此,需要对色块图像进行串扰补偿。
在某些实施方式中,设定补偿参数包括以下步骤:提供预定光环境;设置成像装置的成像参数;拍摄多帧图像;处理多帧图像以获得串扰补偿参数;和将串扰补偿参数保存在所述图像处理装置内。
如上述解释说明,进行串扰补偿,需要在成像装置100的图像传感器10制造过程中获得补偿参数,并将串扰补偿的相关参数预置于成像装置100的存储器中,或装设有成像装置100的电子装置1000(例如手机或平板电脑)中。
预定光环境例如可包括LED匀光板,5000K左右的色温,亮度1000勒克斯左右,成像参数可包括增益值,快门值及镜头位置。设定好相关参数后,进行串扰补偿参数的获取。
处理过程中,首先在设定的光环境中以设置好的成像参数,获取多张色块图像,并合并成一张色块图像,如此可减少以单张色块图像作为校准基础的噪声影响。
请参阅图21,以图21中的图像像素单元Gr为例,其包括Gr1、Gr2、Gr3和Gr4,串扰补偿的目的在于将感光度可能存在差异的感光像素通过补偿基本校准至同一水平。该图像像素单元的平均像素值为Gr_avg=(Gr1+Gr2+Gr3+Gr4)/4,可基本表征这四个感光像素的感光度的平均水平。以此平均值作为基础值,分别计算Gr1/Gr_avg、Gr2/Gr_avg、Gr3/Gr_avg和Gr4/Gr_avg,可以理解,通过计算每一个原始像素的像素值与该图像像素单元的平均像素值的比值,可以基本反映每个原始像素与基础值的偏差,将四个比值作为补偿参数记录到相关装置的存储器中,以在成像时调取并对每个原始像素进行补偿,从而减少串扰,提高图像质量。
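补偿参数的计算(各原始像素与单元平均值的比值)及成像时的调用可用如下 Python 片段示意。片段直接对应 Gr_avg 与 Gr1/Gr_avg 等比值的定义;"除以比值"作为补偿方式为示例假设:

```python
def crosstalk_params(unit):
    """计算一个图像像素单元的串扰补偿参数:各像素值与平均值的比值。"""
    avg = sum(unit) / len(unit)          # 即 Gr_avg=(Gr1+Gr2+Gr3+Gr4)/4
    return [v / avg for v in unit]       # 即 Gr1/Gr_avg ... Gr4/Gr_avg

def apply_crosstalk_compensation(unit, params):
    """成像时调取预置比值,把感光度有差异的像素校准至同一水平(示意)。"""
    return [v / p for v, p in zip(unit, params)]
```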
通常,在设定串扰补偿参数后还应当验证所设定的参数是否准确。
验证过程中,首先以相同的光环境和成像参数获取一张色块图像,依据计算得到的补偿参数对该色块图像进行串扰补偿,计算补偿后的Gr’_avg、Gr’1/Gr’_avg、Gr’2/Gr’_avg、Gr’3/Gr’_avg和Gr’4/Gr’_avg。根据计算结果判断补偿参数是否准确,判断可从宏观与微观两个角度考虑。微观是指某一个原始像素在补偿后仍然偏差较大,成像后易被使用者感知;而宏观则从全局角度判断,也即是在补偿后仍存在偏差的原始像素的总数目较多时,即便单独的每一个原始像素的偏差不大,作为整体仍然会被使用者感知。因此,针对微观设置一个比例阈值即可,针对宏观则需设置一个比例阈值和一个数量阈值。如此,可对设置的串扰补偿参数进行验证,确保补偿参数的正确,以减少串扰对图像质量的影响。
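"微观一个比例阈值、宏观一个比例阈值加一个数量阈值"的验证逻辑可用如下 Python 片段示意。偏差以补偿后像素值与单元均值之比偏离 1 的幅度衡量,阈值取值与具体判定细节均为示例假设:

```python
def verify_compensation(units, micro_ratio, macro_ratio, count_thresh):
    """验证串扰补偿参数是否准确(示意)。

    units: 补偿后的若干图像像素单元(每个为 4 个像素值的列表)
    micro_ratio: 微观比例阈值;macro_ratio: 宏观比例阈值;count_thresh: 宏观数量阈值
    """
    over_macro = 0
    for unit in units:
        avg = sum(unit) / len(unit)
        for v in unit:
            dev = abs(v / avg - 1.0)
            if dev > micro_ratio:        # 微观:单个像素偏差过大,直接判定不通过
                return False
            if dev > macro_ratio:        # 宏观:统计仍存在偏差的像素总数
                over_macro += 1
    return over_macro <= count_thresh    # 宏观:偏差像素总数不超过数量阈值才通过
```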
请参阅图22和图23,在某些实施方式中,步骤S223后还包括步骤:
S228:对仿原图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
在某些实施方式中,第三转化模块220包括处理单元228。步骤S228可以由处理单元228实现,或者说,处理单元228用于对仿原图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
可以理解,将色块图像转化为仿原图像后,仿原像素排布为典型的拜耳阵列,可采用处理装置200进行处理,处理过程中包括镜片阴影校正、去马赛克、降噪和边缘锐化处理,如此,处理后即可得到仿原真彩图像输出给用户。
请参阅图24和图25,在某些实施方式中,步骤S216包括以下步骤:
S2162:利用第二插值算法将合并图像转化成与仿原图像对应的还原图像,第二插值算法的复杂度小于第一插值算法;和
S2164:将还原图像转化成合并真彩图像。
在某些实施方式中,第一转化模块216包括第一转化单元2162和第二转化单元2164。第一转化单元2162用于利用第二插值算法将合并图像转化成与仿原图像对应的还原图像,第二插值算法的复杂度小于第一插值算法。第二转化单元2164用于将还原图像转化成合并真彩图像。也即是说,步骤S2162可以由第一转化单元2162实现,步骤S2164可以由第二转化单元2164实现。
在某些实施方式中,第二插值算法的时间复杂度和空间复杂度都比第一插值算法小。算法的复杂度包括时间复杂度和空间复杂度,时间复杂度用来度量算法需要耗费的时间,空间复杂度是用来度量算法需要耗费的存储空间。时间复杂度小说明算法需要耗费的时间少,空间复杂度小说明算法需要耗费的存储空间小,因此,利用第二插值算法有利于提高运算速度,使得拍照过程更加流畅,提高用户体验。
在某些实施方式中,第二插值算法可以是简单地将合并图像放大四倍而不需要经过其他的复杂运算,从而得到与仿原图像对应的还原图像。
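"简单地将合并图像放大四倍"即把每个合并像素在水平和垂直方向各复制一次,可用如下 Python 片段示意(纯列表实现,仅为帮助理解的草图,并非本发明的限定实现):

```python
def upscale_2x(img):
    """将合并图像每个像素复制成 2*2 块,得到放大四倍的还原图像(示意)。

    img: 二维列表表示的合并图像,元素为像素值
    """
    out = []
    for row in img:
        expanded = [v for v in row for _ in range(2)]  # 水平方向复制一次
        out.append(expanded)
        out.append(list(expanded))                     # 垂直方向复制一次
    return out
```

该算法只做像素复制、不含任何插值运算,时间复杂度与空间复杂度均明显小于第一插值算法,这正是其有利于提高运算速度、使拍照更流畅的原因。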
可以理解,通过第一转化单元2162和第二转化单元2164转化得到合并真彩图像后,可以对合并真彩图像进行降噪和边缘锐化处理,如此,处理后即可得到质量较佳的合并真彩图像输出给用户。
请参阅图26,本发明实施方式的电子装置1000还包括电路板500、处理器600、存储器700和电源电路800。其中,电路板500安置在电子装置1000的空间内部,处理器600和存储器700设置在电路板500上,电源电路800用于为电子装置1000的各个电路或器件供电。
存储器700用于存储可执行程序代码。处理器600通过读取存储器700中存储的可执行程序代码来运行与可执行程序代码对应的程序,以用于执行上述实施方式的控制方法。处理器600用于执行以下步骤:
控制图像传感器输出合并图像,合并图像包括合并像素阵列,同一感光像素单元的多个感光像素合并输出作为一个合并像素;
将合并图像划分成阵列排布的分析区域;
计算每个分析区域的相位差;
将相位差符合预定条件对应的分析区域归并为对焦区域;
识别对焦区域是否存在人脸;和
当存在人脸时,将合并图像转化成合并真彩图像。
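处理器600执行的上述各步骤构成一个决策流程:按相位差归并对焦区域,再依据对焦区域是否存在人脸选择输出模式。以下 Python 片段仅为流程示意;"相位差不大于阈值"这一预定条件的具体形式、阈值取值及模式名称均为示例假设:

```python
def select_focus_and_mode(phase_diffs, phase_thresh, focus_has_face):
    """按相位差归并对焦区域,并依据是否存在人脸选择输出模式(示意)。

    phase_diffs: {分析区域标识: 相位差} 字典(由划分与计算步骤得到)
    """
    # 相位差符合预定条件(此处假设为绝对值不大于阈值)的分析区域归并为对焦区域
    focus_regions = [r for r, pd in phase_diffs.items() if abs(pd) <= phase_thresh]
    # 对焦区域存在人脸时,将合并图像转化成合并真彩图像;否则进入后续判断
    mode = "merged_true_color" if focus_has_face else "further_judgment"
    return focus_regions, mode
```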
需要说明的是,前述对控制方法和控制装置200的解释说明也适用于本发明实施方式的电子装置1000,在此不再赘述。
在本说明书的描述中,参考术语“一个实施方式”、“一些实施方式”、“示意性实施方式”、“示例”、“具体示例”、或“一些示例”等的描述意指结合所述实施方式或示例描述的具体特征、结构、材料或者特点包含于本发明的至少一个实施方式或示例中。在本说明书中,对上述术语的示意性表述不一定指的是相同的实施方式或示例。而且,描述的具体特征、结构、材料或者特点可以在任何的一个或多个实施方式或示例中以合适的方式结合。
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括一个或更多个用于执行特定逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分,并且本发明的优选实施方式的范围包括另外的实现,其中可以不按所示出或讨论的顺序,包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能,这应被本发明的实施例所属技术领域的技术人员所理解。
在流程图中表示或在此以其他方式描述的逻辑和/或步骤,例如,可以被认为是用于执行逻辑功能的可执行指令的定序列表,可以具体执行在任何计算机可读介质中,以供指令执行系统、装置或设备(如基于计算机的系统、包括处理器的系统或其他可以从指令执行系统、装置或设备取指令并执行指令的系统)使用,或结合这些指令执行系统、装置或设备而使用。就本说明书而言,"计算机可读介质"可以是任何可以包含、存储、通信、传播或传输程序以供指令执行系统、装置或设备或结合这些指令执行系统、装置或设备而使用的装置。计算机可读介质的更具体的示例(非穷尽性列表)包括以下:具有一个或多个布线的电连接部(电子装置),便携式计算机盘盒(磁装置),随机存取存储器(RAM),只读存储器(ROM),可擦除可编辑只读存储器(EPROM或闪速存储器),光纤装置,以及便携式光盘只读存储器(CDROM)。另外,计算机可读介质甚至可以是可在其上打印所述程序的纸或其他合适的介质,因为可以例如通过对纸或其他介质进行光学扫描,接着进行编辑、解译或必要时以其他合适方式进行处理来以电子方式获得所述程序,然后将其存储在计算机存储器中。
应当理解,本发明的各部分可以用硬件、软件、固件或它们的组合来实现。在上述实施方式中,多个步骤或方法可以用存储在存储器中且由合适的指令执行系统执行的软件或固件来实现。例如,如果用硬件来实现,和在另一实施方式中一样,可用本领域公知的下列技术中的任一项或它们的组合来实现:具有用于对数据信号执行逻辑功能的逻辑门电路的离散逻辑电路,具有合适的组合逻辑门电路的专用集成电路,可编程门阵列(PGA),现场可编程门阵列(FPGA)等。
本技术领域的普通技术人员可以理解执行上述实施方法携带的全部或部分步骤是可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,该程序在执行时,包括方法实施例的步骤之一或其组合。
此外,在本发明各个实施例中的各功能单元可以集成在一个处理模块中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。
上述提到的存储介质可以是只读存储器,磁盘或光盘等。尽管上面已经示出和描述了本发明的实施例,可以理解的是,上述实施例是示例性的,不能理解为对本发明的限制,本领域的普通技术人员在本发明的范围内可以对上述实施例进行变化、修改、替换和变型。

Claims (36)

  1. 一种控制方法,用于控制电子装置,其特征在于,所述电子装置包括成像装置和显示器,所述成像装置包括图像传感器,所述图像传感器包括感光像素单元阵列和设置在所述感光像素单元阵列上的滤光片单元阵列,每个所述滤光片单元覆盖对应一个所述感光像素单元,每个所述感光像素单元包括多个感光像素,所述控制方法包括以下步骤:
    控制所述图像传感器输出合并图像,所述合并图像包括合并像素阵列,同一所述感光像素单元的多个感光像素合并输出作为一个所述合并像素;
    将所述合并图像划分成阵列排布的分析区域;
    计算每个所述分析区域的相位差;
    将所述相位差符合预定条件对应的所述分析区域归并为对焦区域;
    识别所述对焦区域是否存在人脸;和
    当存在人脸时,将所述合并图像转化成合并真彩图像。
  2. 如权利要求1所述的控制方法,其特征在于,所述控制方法包括以下步骤:
    当不存在人脸时,判断所述对焦区域的亮度是否小于等于亮度阈值、绿色占比是否小于等于占比阈值和空间频率是否小于等于频率阈值;
    在所述对焦区域的亮度小于等于所述亮度阈值、绿色占比小于等于所述占比阈值和空间频率小于等于所述频率阈值时,将所述合并图像转化成所述合并真彩图像。
  3. 如权利要求2所述的控制方法,其特征在于,所述控制方法包括以下步骤:
    在所述对焦区域的亮度大于所述亮度阈值、绿色占比大于所述占比阈值和空间频率大于所述频率阈值时,控制所述图像传感器输出色块图像,所述色块图像包括预定阵列排布的图像像素单元,所述图像像素单元包括多个原始像素,每个所述感光像素对应一个所述原始像素;
    将所述色块图像转化成仿原图像,所述仿原图像包括阵列排布的仿原像素,所述仿原像素包括当前像素,所述原始像素包括与所述当前像素对应的关联像素,所述将所述色块图像转化成所述仿原图像的步骤包括以下步骤:
    判断所述当前像素的颜色与所述关联像素的颜色是否相同;
    在所述当前像素的颜色与所述关联像素的颜色相同时,将所述关联像素的像素值作为所述当前像素的像素值;和
    在所述当前像素的颜色与所述关联像素的颜色不同时,根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值,所述图像像素单元包括所述关联像素单元,所述关联像素单元的颜色与所述当前像素相同且与所述当前像素相邻;和
    将所述仿原图像转化成仿原真彩图像。
  4. 如权利要求3所述的控制方法,其特征在于,所述预定阵列包括拜耳阵列。
  5. 如权利要求3所述的控制方法,其特征在于,所述图像像素单元包括2*2阵列的所述原始像素。
  6. 如权利要求3所述的控制方法,其特征在于,所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤包括以下步骤:
    计算所述关联像素各个方向上的渐变量;
    计算所述关联像素各个方向上的权重;和
    根据所述渐变量及所述权重计算所述当前像素的像素值。
  7. 如权利要求3所述的控制方法,其特征在于,所述控制方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤前包括以下步骤:
    对所述色块图像做白平衡补偿;
    所述控制方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤后包括以下步骤:
    对所述仿原图像做白平衡补偿还原。
  8. 如权利要求3所述的控制方法,其特征在于,所述控制方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤前包括以下步骤:
    对所述色块图像做坏点补偿。
  9. 如权利要求3所述的控制方法,其特征在于,所述控制方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤前包括以下步骤:
    对所述色块图像做串扰补偿。
  10. 如权利要求3所述的控制方法,其特征在于,所述控制方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤后包括如下步骤:
    对所述仿原图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
  11. 如权利要求3所述的控制方法,其特征在于,所述将所述合并图像转化成合并真彩图像的步骤包括以下步骤:
    利用第二插值算法将所述合并图像转化成与所述仿原图像对应的还原图像,所述第二插值算法的复杂度小于所述第一插值算法;和
    将所述还原图像转化成所述合并真彩图像。
  12. 一种控制装置,用于控制电子装置,其特征在于,所述电子装置包括成像装置和显示器,所述成像装置包括图像传感器,所述图像传感器包括感光像素单元阵列和设置在所述感光像素单元阵列上的滤光片单元阵列,每个所述滤光片单元覆盖对应一个所述感光像素单元,每个所述感光像素单元包括多个感光像素,所述控制装置包括:
    第一控制模块,所述第一控制模块用于控制所述图像传感器输出合并图像,所述合并图像包括合并像素阵列,同一所述感光像素单元的多个感光像素合并输出作为一个所述合并像素;
    划分模块,所述划分模块用于将所述合并图像划分成阵列排布的分析区域;
    计算模块,所述计算模块用于计算每个所述分析区域的相位差;
    归并模块,所述归并模块用于将所述相位差符合预定条件对应的所述分析区域归并为对焦区域;
    识别模块,所述识别模块用于识别所述对焦区域是否存在人脸;和
    第一转化模块,所述第一转化模块用于当存在人脸时,将所述合并图像转化成合并真彩图像。
  13. 如权利要求12所述的控制装置,其特征在于,所述控制装置包括:
    判断模块,所述判断模块用于当不存在人脸时,判断所述对焦区域的亮度是否小于等于亮度阈值、绿色占比是否小于等于占比阈值和空间频率是否小于等于频率阈值;
    第二转化模块,所述第二转化模块用于在所述对焦区域的亮度小于等于所述亮度阈值、绿色占比小于等于所述占比阈值和空间频率小于等于所述频率阈值时,将所述合并图像转化成所述合并真彩图像。
  14. 如权利要求13所述的控制装置,其特征在于,所述控制装置包括:
    第二控制模块,所述第二控制模块用于在所述对焦区域的亮度大于所述亮度阈值、绿色占比大于所述占比阈值和空间频率大于所述频率阈值时,控制所述图像传感器输出色块图像,所述色块图像包括预定阵列排布的图像像素单元,所述图像像素单元包括多个原始像素,每个所述感光像素对应一个所述原始像素;
    第三转化模块,所述第三转化模块用于将所述色块图像转化成仿原图像,所述仿原图像包括阵列排布的仿原像素,所述仿原像素包括当前像素,所述原始像素包括与所述当前像素对应的关联像素,所述第三转化模块包括:
    判断单元,所述判断单元用于判断所述当前像素的颜色与所述关联像素的颜色是否相同;
    第一计算单元,所述第一计算单元用于在所述当前像素的颜色与所述关联像素的颜色相同时,将所述关联像素的像素值作为所述当前像素的像素值;和
    第二计算单元,所述第二计算单元用于在所述当前像素的颜色与所述关联像素的颜色不同时,根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值,所述图像像素单元包括所述关联像素单元,所述关联像素单元的颜色与所述当前像素相同且与所述当前像素相邻;和
    第四转化模块,所述第四转化模块用于将所述仿原图像转化成仿原真彩图像。
  15. 如权利要求14所述的控制装置,其特征在于,所述预定阵列包括拜耳阵列。
  16. 如权利要求14所述的控制装置,其特征在于,所述图像像素单元包括2*2阵列的所述原始像素。
  17. 如权利要求14所述的控制装置,其特征在于,所述第二计算单元包括:
    第一计算子单元,所述第一计算子单元用于计算所述关联像素各个方向上的渐变量;
    第二计算子单元,所述第二计算子单元用于计算所述关联像素各个方向上的权重;和
    第三计算子单元,所述第三计算子单元用于根据所述渐变量及所述权重计算所述当前像素的像素值。
  18. 如权利要求14所述的控制装置,其特征在于,所述第三转化模块包括:
    白平衡补偿单元,所述白平衡补偿单元用于对所述色块图像做白平衡补偿;和
    白平衡补偿还原单元,所述白平衡补偿还原单元用于对所述仿原图像做白平衡补偿还原。
  19. 如权利要求14所述的控制装置,其特征在于,所述第三转化模块包括:
    坏点补偿单元,所述坏点补偿单元用于对所述色块图像做坏点补偿。
  20. 如权利要求14所述的控制装置,其特征在于,所述第三转化模块包括:
    串扰补偿单元,所述串扰补偿单元用于对所述色块图像做串扰补偿。
  21. 如权利要求14所述的控制装置,其特征在于,所述第三转化模块包括:
    处理单元,所述处理单元用于对所述仿原图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
  22. 如权利要求14所述的控制装置,其特征在于,所述第一转化模块包括:
    第一转化单元,所述第一转化单元用于利用第二插值算法将所述合并图像转化成与所述仿原图像对应的还原图像,所述第二插值算法的复杂度小于所述第一插值算法;和
    第二转化单元,所述第二转化单元用于将所述还原图像转化成所述合并真彩图像。
  23. 一种电子装置,其特征在于包括:
    成像装置;
    显示器;和
    如权利要求12-22任意一项所述的控制装置。
  24. 如权利要求23所述的电子装置,其特征在于,所述电子装置包括手机或平板电脑。
  25. 如权利要求23所述的电子装置,其特征在于,所述成像装置包括前置相机或后置相机。
  26. 一种电子装置,其特征在于,包括:
    成像装置,所述成像装置包括图像传感器,所述图像传感器包括感光像素单元阵列和设置在所述感光像素单元阵列上的滤光片单元阵列,每个所述滤光片单元覆盖对应一个所述感光像素单元,每个所述感光像素单元包括多个感光像素;
    显示器;和
    处理器,所述处理器用于:
    控制所述图像传感器输出合并图像,所述合并图像包括合并像素阵列,同一所述感光像素单元的多个感光像素合并输出作为一个所述合并像素;
    将所述合并图像划分成阵列排布的分析区域;
    计算每个所述分析区域的相位差;
    将所述相位差符合预定条件对应的所述分析区域归并为对焦区域;
    识别所述对焦区域是否存在人脸;和
    当存在人脸时,将所述合并图像转化成合并真彩图像。
  27. 如权利要求26所述的电子装置,其特征在于,所述处理器用于:
    当不存在人脸时,判断所述对焦区域的亮度是否小于等于亮度阈值、绿色占比是否小于等于占比阈值和空间频率是否小于等于频率阈值;
    在所述对焦区域的亮度小于等于所述亮度阈值、绿色占比小于等于所述占比阈值和空间频率小于等于所述频率阈值时,将所述合并图像转化成所述合并真彩图像。
  28. 如权利要求27所述的电子装置,其特征在于,所述处理器用于:
    在所述对焦区域的亮度大于所述亮度阈值、绿色占比大于所述占比阈值和空间频率大于所述频率阈值时,控制所述图像传感器输出色块图像,所述色块图像包括预定阵列排布的图像像素单元,所述图像像素单元包括多个原始像素,每个所述感光像素对应一个所述原始像素;
    将所述色块图像转化成仿原图像,所述仿原图像包括阵列排布的仿原像素,所述仿原像素包括当前像素,所述原始像素包括与所述当前像素对应的关联像素;
    判断所述当前像素的颜色与所述关联像素的颜色是否相同;
    在所述当前像素的颜色与所述关联像素的颜色相同时,将所述关联像素的像素值作为所述当前像素的像素值;
    在所述当前像素的颜色与所述关联像素的颜色不同时,根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值,所述图像像素单元包括所述关联像素单元,所述关联像素单元的颜色与所述当前像素相同且与所述当前像素相邻;和
    将所述仿原图像转化成仿原真彩图像。
  29. 如权利要求28所述的电子装置,其特征在于,所述预定阵列包括拜耳阵列。
  30. 如权利要求28所述的电子装置,其特征在于,所述图像像素单元包括2*2阵列的所述原始像素。
  31. 如权利要求28所述的电子装置,其特征在于,所述处理器用于:
    计算所述关联像素各个方向上的渐变量;
    计算所述关联像素各个方向上的权重;和
    根据所述渐变量及所述权重计算所述当前像素的像素值。
  32. 如权利要求28所述的电子装置,其特征在于,所述处理器用于:
    对所述色块图像做白平衡补偿;和
    对所述仿原图像做白平衡补偿还原。
  33. 如权利要求28所述的电子装置,其特征在于,所述处理器用于:
    对所述色块图像做坏点补偿。
  34. 如权利要求28所述的电子装置,其特征在于,所述处理器用于:
    对所述色块图像做串扰补偿。
  35. 如权利要求28所述的电子装置,其特征在于,所述处理器用于:
    对所述仿原图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
  36. 如权利要求28所述的电子装置,其特征在于,所述处理器用于:
    利用第二插值算法将所述合并图像转化成与所述仿原图像对应的还原图像,所述第二插值算法的复杂度小于所述第一插值算法;和
    将所述还原图像转化成所述合并真彩图像。
PCT/CN2017/085212 2016-11-29 2017-05-19 控制方法、控制装置及电子装置 WO2018099008A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611079892.4A CN106454289B (zh) 2016-11-29 2016-11-29 控制方法、控制装置及电子装置
CN201611079892.4 2016-11-29

Publications (1)

Publication Number Publication Date
WO2018099008A1 true WO2018099008A1 (zh) 2018-06-07

Family

ID=58222540

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/085212 WO2018099008A1 (zh) 2016-11-29 2017-05-19 控制方法、控制装置及电子装置

Country Status (5)

Country Link
US (2) US10348962B2 (zh)
EP (1) EP3327781B1 (zh)
CN (1) CN106454289B (zh)
ES (1) ES2774493T3 (zh)
WO (1) WO2018099008A1 (zh)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106604001B (zh) * 2016-11-29 2018-06-29 广东欧珀移动通信有限公司 图像处理方法、图像处理装置、成像装置及电子装置
CN106341670B (zh) 2016-11-29 2017-09-22 广东欧珀移动通信有限公司 控制方法、控制装置及电子装置
CN106791477B (zh) * 2016-11-29 2019-07-19 Oppo广东移动通信有限公司 图像处理方法、图像处理装置、成像装置及制造方法
CN106454289B (zh) * 2016-11-29 2018-01-23 广东欧珀移动通信有限公司 控制方法、控制装置及电子装置
CN106507068B (zh) 2016-11-29 2018-05-04 广东欧珀移动通信有限公司 图像处理方法及装置、控制方法及装置、成像及电子装置
CN106507069B (zh) * 2016-11-29 2018-06-05 广东欧珀移动通信有限公司 控制方法、控制装置及电子装置
CN106507019B (zh) 2016-11-29 2019-05-10 Oppo广东移动通信有限公司 控制方法、控制装置、电子装置
CN106454288B (zh) 2016-11-29 2018-01-19 广东欧珀移动通信有限公司 控制方法、控制装置、成像装置及电子装置
CN106454054B (zh) 2016-11-29 2019-03-19 Oppo广东移动通信有限公司 控制方法、控制装置及电子装置
CN106504218B (zh) 2016-11-29 2019-03-12 Oppo广东移动通信有限公司 控制方法、控制装置及电子装置
US10872261B2 (en) * 2018-12-03 2020-12-22 Qualcomm Incorporated Dynamic binning of sensor pixels
CN110738628B (zh) * 2019-10-15 2023-09-05 湖北工业大学 一种基于wiml比较图的自适应焦点检测多聚焦图像融合方法
CN112866549B (zh) * 2019-11-12 2022-04-12 Oppo广东移动通信有限公司 图像处理方法和装置、电子设备、计算机可读存储介质
CN115086517A (zh) * 2022-05-26 2022-09-20 联宝(合肥)电子科技有限公司 一种图像采集方法、装置、电子设备及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1685739A (zh) * 2002-09-26 2005-10-19 精工爱普生株式会社 图象数据的输出图象调整
US20090200451A1 (en) * 2008-02-08 2009-08-13 Micron Technology, Inc. Color pixel arrays having common color filters for multiple adjacent pixels for use in cmos imagers
CN101998048A (zh) * 2009-08-05 2011-03-30 三星电子株式会社 数字图像信号处理方法和数字图像信号处理设备
CN103531603A (zh) * 2013-10-30 2014-01-22 上海集成电路研发中心有限公司 一种cmos图像传感器
CN103765876A (zh) * 2011-08-31 2014-04-30 索尼公司 图像处理设备以及图像处理方法和程序
CN106454289A (zh) * 2016-11-29 2017-02-22 广东欧珀移动通信有限公司 控制方法、控制装置及电子装置

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7088392B2 (en) * 2001-08-27 2006-08-08 Ramakrishna Kakarala Digital image system and method for implementing an adaptive demosaicing method
JP4046079B2 (ja) * 2003-12-10 2008-02-13 ソニー株式会社 画像処理装置
JP2005215750A (ja) * 2004-01-27 2005-08-11 Canon Inc 顔検知装置および顔検知方法
JP2007166456A (ja) * 2005-12-16 2007-06-28 Fuji Xerox Co Ltd 画像調整装置、画像調整方法及びプログラム
US7773138B2 (en) 2006-09-13 2010-08-10 Tower Semiconductor Ltd. Color pattern and pixel level binning for APS image sensor using 2×2 photodiode sharing scheme
JP5446076B2 (ja) * 2007-07-17 2014-03-19 株式会社ニコン デジタルカメラ
JP5113514B2 (ja) * 2007-12-27 2013-01-09 キヤノン株式会社 ホワイトバランス制御装置およびホワイトバランス制御方法
JP2010035048A (ja) * 2008-07-30 2010-02-12 Fujifilm Corp 撮像装置及び撮像方法
JP5075934B2 (ja) 2010-03-25 2012-11-21 株式会社東芝 固体撮像装置および画像記録装置
JP5825817B2 (ja) * 2011-04-01 2015-12-02 キヤノン株式会社 固体撮像素子及び撮像装置
JP5843486B2 (ja) 2011-05-31 2016-01-13 キヤノン株式会社 撮像装置およびその制御方法
JP2013021660A (ja) * 2011-07-14 2013-01-31 Sony Corp 画像処理装置、撮像装置、および画像処理方法、並びにプログラム
CN103782213B (zh) * 2011-09-22 2015-11-25 富士胶片株式会社 数字相机
AU2012374649A1 (en) 2012-03-27 2014-09-11 Sony Corporation Image processing device, image-capturing element, image processing method, and program
JP6545007B2 (ja) * 2015-06-11 2019-07-17 キヤノン株式会社 撮像装置
US10148863B2 (en) * 2015-12-08 2018-12-04 Canon Kabushiki Kaisha Information processing apparatus and information processing method
CN105611258A (zh) * 2015-12-18 2016-05-25 广东欧珀移动通信有限公司 图像传感器的成像方法、成像装置和电子装置
US11354008B2 (en) * 2016-08-10 2022-06-07 Microsoft Technology Licensing, Llc Visual notification


Also Published As

Publication number Publication date
EP3327781B1 (en) 2019-12-25
EP3327781A1 (en) 2018-05-30
US10348962B2 (en) 2019-07-09
US20180152632A1 (en) 2018-05-31
CN106454289A (zh) 2017-02-22
US20180249074A1 (en) 2018-08-30
US10110809B2 (en) 2018-10-23
CN106454289B (zh) 2018-01-23
ES2774493T3 (es) 2020-07-21


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17876474

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17876474

Country of ref document: EP

Kind code of ref document: A1