WO2018098981A1 - Control method, control device, electronic device and computer-readable storage medium - Google Patents

Control method, control device, electronic device and computer-readable storage medium

Info

Publication number
WO2018098981A1
WO2018098981A1 (PCT/CN2017/081916)
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
image
unit
original
color
Prior art date
Application number
PCT/CN2017/081916
Other languages
English (en)
French (fr)
Inventor
韦怡
Original Assignee
广东欧珀移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广东欧珀移动通信有限公司 filed Critical 广东欧珀移动通信有限公司
Publication of WO2018098981A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/387Composing, repositioning or otherwise geometrically modifying originals
    • H04N1/3871Composing, repositioning or otherwise geometrically modifying originals the composed originals being of different kinds, e.g. low- and high-resolution originals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4015Image demosaicing, e.g. colour filter arrays [CFA] or Bayer patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/741Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/843Demosaicing, e.g. interpolating colour pixel values
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/134Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/40Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N25/46Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by combining or binning pixels
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50Control of the SSIS exposure
    • H04N25/57Control of the dynamic range
    • H04N25/58Control of the dynamic range involving two or more exposures
    • H04N25/581Control of the dynamic range involving two or more exposures acquired simultaneously
    • H04N25/585Control of the dynamic range involving two or more exposures acquired simultaneously with pixels having different sensitivities within the sensor, e.g. fast or slow pixels or pixels having different sizes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70SSIS architectures; Circuits associated therewith
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70SSIS architectures; Circuits associated therewith
    • H04N25/76Addressed sensors, e.g. MOS or CMOS sensors
    • H04N25/77Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2209/00Details of colour television systems
    • H04N2209/04Picture signal generators
    • H04N2209/041Picture signal generators using solid-state devices
    • H04N2209/042Picture signal generators using solid-state devices having a single pick-up sensor
    • H04N2209/045Picture signal generators using solid-state devices having a single pick-up sensor using mosaic colour filter
    • H04N2209/046Colour interpolation to calculate the missing colour values

Definitions

  • The present invention relates to image processing technology, and more particularly to a control method, a control device, an electronic device, and a computer-readable storage medium.
  • An existing image sensor includes a photosensitive pixel unit array and a filter unit array disposed on the photosensitive pixel unit array, each filter unit covering a corresponding photosensitive pixel unit, and each photosensitive pixel unit including a plurality of photosensitive pixels.
  • In operation, the image sensor can be controlled to expose and output a merged image, the merged image including a merged pixel array in which the plurality of photosensitive pixels of the same photosensitive pixel unit are merged and output as one merged pixel. In this way, the signal-to-noise ratio of the merged image can be improved; however, the resolution of the merged image is lowered.
  • The image sensor can also be controlled to output a high-pixel-count patch image, the patch image including an original pixel array in which each photosensitive pixel corresponds to one original pixel.
  • However, since the plurality of original pixels corresponding to the same filter unit have the same color, the resolution of the patch image likewise cannot be improved. It is therefore necessary to convert the high-pixel patch image into a high-pixel pseudo-original image by interpolation, the pseudo-original image including pseudo-original pixels arranged in a Bayer array.
  • The pseudo-original image can be converted into a true-color image by the control method and saved. Interpolation improves the sharpness of the true-color image, but it is resource-intensive and time-consuming and lengthens the shooting time; moreover, applying high-definition processing to certain parts of a true-color image, such as human faces, actually degrades the user experience.
  • Embodiments of the present invention provide a control method, a control device, an electronic device, and a computer readable storage medium.
  • A control method of an embodiment of the present invention is used to control an electronic device. The electronic device includes an imaging device and a touch screen, the imaging device includes an image sensor, and the image sensor includes a photosensitive pixel unit array and a filter unit array disposed on the photosensitive pixel unit array, each filter unit covering a corresponding photosensitive pixel unit, and each photosensitive pixel unit including a plurality of photosensitive pixels. The control method includes: controlling the image sensor to output a merged image and a patch image of the same scene; identifying a face region from the merged image; converting the patch image into a pseudo-original image using a first interpolation algorithm; converting the merged image into a restored image by a second interpolation algorithm whose complexity is lower than that of the first interpolation algorithm; and merging the pseudo-original image and the restored image.
  • Converting the patch image into the pseudo-original image using the first interpolation algorithm includes the following steps: determining whether the associated pixel is outside the face region; determining, when the associated pixel is outside the face region, whether the color of the current pixel is the same as the color of the associated pixel; taking, when the colors are the same, the pixel value of the associated pixel as the pixel value of the current pixel; and calculating, when the colors differ, the pixel value of the current pixel from the pixel values of an associated pixel unit by the first interpolation algorithm.
  • A control device of an embodiment of the present invention is used to control an electronic device. The electronic device includes an imaging device and a touch screen, the imaging device includes an image sensor, and the image sensor includes a photosensitive pixel unit array and a filter unit array disposed on the photosensitive pixel unit array, each filter unit covering a corresponding photosensitive pixel unit, and each photosensitive pixel unit including a plurality of photosensitive pixels. The control device includes an output module, an identification module, a conversion module, and a merging module.
  • The output module is configured to control the image sensor to output a merged image and a patch image of the same scene, the merged image including a merged pixel array in which the plurality of photosensitive pixels of the same photosensitive pixel unit are merged and output as one merged pixel, and the patch image including image pixel units arranged in a predetermined array, each image pixel unit including a plurality of original pixels and each photosensitive pixel corresponding to one original pixel. The identification module is configured to identify a face region from the merged image. The conversion module is configured to convert the patch image into a pseudo-original image using a first interpolation algorithm, the pseudo-original image including pseudo-original pixels arranged in an array, the pseudo-original pixels including a current pixel, and the original pixels including an associated pixel corresponding to the current pixel. The conversion module includes a first determining unit, a second determining unit, a first calculating unit, a second calculating unit, and a third calculating unit. The first determining unit is configured to determine whether the associated pixel is outside the face region; the second determining unit is configured to determine, when the associated pixel is outside the face region, whether the color of the current pixel is the same as the color of the associated pixel; the first calculating unit is configured to take the pixel value of the associated pixel as the pixel value of the current pixel when the colors are the same; the second calculating unit is configured to calculate, when the colors differ, the pixel value of the current pixel from the pixel values of an associated pixel unit by the first interpolation algorithm; the third calculating unit is configured to convert the merged image into a restored image corresponding to the pseudo-original image by a second interpolation algorithm whose complexity is lower than that of the first interpolation algorithm; and the merging module is configured to merge the pseudo-original image and the restored image to obtain the merged pseudo-original image.
  • An electronic device includes an imaging device, a touch screen, and the above-described control device.
  • An electronic device includes a housing, a processor, a memory, a circuit board, and a power supply circuit.
  • the circuit board is disposed inside a space enclosed by the casing, the processor and the memory are disposed on the circuit board; and the power circuit is configured to supply power to each circuit or device of the electronic device;
  • the memory is for storing executable program code; the processor runs a program corresponding to the executable program code by reading executable program code stored in the memory for executing the control method described above.
  • a computer readable storage medium in accordance with an embodiment of the present invention has instructions stored therein.
  • the processor of the electronic device executes the instruction, the electronic device performs the control method described above.
  • The control method, control device, electronic device, and computer-readable storage medium of the embodiments of the present invention process the portion of the patch image outside the face region with a first interpolation algorithm to improve the resolution and detail of the image outside the face region, and process the merged image with a second interpolation algorithm whose complexity is lower than that of the first interpolation algorithm. This improves the signal-to-noise ratio, resolution, and detail of the image while reducing the data to be processed and the processing time, improving the user experience.
  • FIG. 1 is a schematic flow chart of a control method according to an embodiment of the present invention.
  • FIG. 2 is another schematic flowchart of a control method according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of functional modules of a control device according to an embodiment of the present invention.
  • FIG. 4 is a schematic block diagram of an image sensor according to an embodiment of the present invention.
  • FIG. 5 is a circuit diagram of an image sensor according to an embodiment of the present invention.
  • FIG. 6 is a schematic view of a filter unit according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of an image sensor according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a merged image state according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram showing a state of a patch image according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram showing a state of a control method according to an embodiment of the present invention.
  • FIG. 11 is a schematic flow chart of a control method according to some embodiments of the present invention.
  • FIG. 12 is a schematic diagram of functional modules of a second computing unit according to some embodiments of the present invention.
  • FIG. 13 is a schematic flow chart of a control method according to some embodiments of the present invention.
  • FIG. 14 is a schematic diagram of functional blocks of a control device according to some embodiments of the present invention.
  • FIG. 15 is a schematic diagram of a state of image merging of a control method according to some embodiments of the present invention.
  • FIG. 16 is a schematic diagram of functional modules of an electronic device according to some embodiments of the present invention.
  • FIG. 17 is a schematic diagram of functional modules of an electronic device according to some embodiments of the present invention.
  • the electronic device 100 includes an imaging device 20 and a touch screen 30.
  • The imaging device 20 includes an image sensor 21, and the image sensor 21 includes a photosensitive pixel unit array 212 and a filter unit array 211 disposed on the photosensitive pixel unit array 212; each filter unit 211a covers a corresponding photosensitive pixel unit 212a, and each photosensitive pixel unit 212a includes a plurality of photosensitive pixels 2121.
  • the control method includes the following steps:
  • S11: Control the image sensor 21 to output a merged image and a patch image of the same scene, the merged image including a merged pixel array in which the plurality of photosensitive pixels 2121 of the same photosensitive pixel unit 212a are merged and output as one merged pixel, and the patch image including image pixel units arranged in a predetermined array, each image pixel unit including a plurality of original pixels and each photosensitive pixel corresponding to one original pixel;
  • S12: Identify a face region from the merged image;
  • S13: Convert the patch image into a pseudo-original image using a first interpolation algorithm, the pseudo-original image including pseudo-original pixels arranged in an array, the pseudo-original pixels including a current pixel, and the original pixels including an associated pixel corresponding to the current pixel. Converting the patch image into the pseudo-original image using the first interpolation algorithm includes the following steps:
  • S131: determining whether the associated pixel is outside the face region;
  • S133: determining, when the associated pixel is outside the face region, whether the color of the current pixel is the same as the color of the associated pixel;
  • S135: taking the pixel value of the associated pixel as the pixel value of the current pixel when the colors are the same; and
  • S137: calculating, when the colors differ, the pixel value of the current pixel from the pixel values of an associated pixel unit by the first interpolation algorithm, the image pixel unit including the associated pixel unit, and the associated pixel unit being of the same color as the current pixel and adjacent to the current pixel; and
  • S139: Convert the merged image into a restored image corresponding to the pseudo-original image by a second interpolation algorithm, the complexity of the second interpolation algorithm being lower than that of the first interpolation algorithm; and
  • S14: Merge the pseudo-original image and the restored image to obtain the merged pseudo-original image.
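  • To make the flow of steps S11 to S14 concrete, the following sketch strings them together in Python. It is only an illustrative outline, not the patent's implementation: `sensor.read_merged`, `sensor.read_patch`, `detect_faces`, `first_interp`, and `second_interp` are hypothetical helpers standing in for the sensor readout, the face recognition, and the two interpolation algorithms.

```python
import numpy as np

def control_method(sensor, detect_faces, first_interp, second_interp):
    """Illustrative outline of steps S11-S14 (not the patent's code)."""
    # S11: the same scene is read out twice - as a half-size merged (binned)
    # image and as a full-size patch image with 2x2 same-colour blocks.
    merged = sensor.read_merged()          # shape (H/2, W/2)
    patch = sensor.read_patch()            # shape (H, W)

    # S12: the face region is recognised on the small merged image.
    face_boxes = detect_faces(merged)      # [(x0, y0, x1, y1), ...]

    # The boxes are found in merged-image coordinates, so scale them by two
    # to build a face mask in patch-image coordinates.
    face_mask = np.zeros(patch.shape, dtype=bool)
    for x0, y0, x1, y1 in face_boxes:
        face_mask[2 * y0:2 * y1, 2 * x0:2 * x1] = True

    # S13/S139: the expensive first interpolation handles the patch image
    # outside the face region; the cheap second interpolation up-scales the
    # merged image to a restored image of patch-image size.
    pseudo_original = first_interp(patch, skip_mask=face_mask)
    restored = second_interp(merged, out_shape=patch.shape)

    # S14: merge - face region from the restored image, the rest from the
    # sharper pseudo-original image.
    return np.where(face_mask, restored, pseudo_original)
```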
  • control method of the embodiment of the present invention may be implemented by the control device 10 of the embodiment of the present invention.
  • the control device 10 is for controlling the electronic device 100.
  • the electronic device 100 includes an imaging device 20 and a touch screen 30.
  • the imaging device 20 includes an image sensor 21 including a photosensitive pixel unit array 212 and a filter disposed on the photosensitive pixel unit array 212.
  • Each of the filter unit arrays 211 covers a corresponding one of the photosensitive pixel units 212a
  • each of the photosensitive pixel unit arrays 212 includes a plurality of photosensitive pixels 2121
  • the control device 10 includes an output module 11, an identification module 12, and a conversion module 13 And merge module 14.
  • the output module 11 is configured to control the image sensor 21 to output a merged image and a patch image of the same scene.
  • The merged image includes a merged pixel array, the plurality of photosensitive pixels 2121 of the same photosensitive pixel unit 212a being merged and output as one merged pixel, and the patch image includes image pixel units arranged in a predetermined array, each image pixel unit including a plurality of original pixels and each photosensitive pixel 2121 corresponding to one original pixel. The identification module 12 is configured to identify a face region from the merged image. The conversion module 13 is configured to convert the patch image into a pseudo-original image using a first interpolation algorithm, the pseudo-original image including pseudo-original pixels arranged in an array, the pseudo-original pixels including a current pixel, and the original pixels including an associated pixel corresponding to the current pixel. The conversion module 13 includes a first determining unit 131, a second determining unit 133, a first calculating unit 135, a second calculating unit 137, and a third calculating unit 139. The first determining unit 131 is configured to determine whether the associated pixel is outside the face region, the second determining unit 133 is configured to determine, when the associated pixel is outside the face region, whether the color of the current pixel is the same as the color of the associated pixel, and the first calculating unit 135 is configured to take the pixel value of the associated pixel as the pixel value of the current pixel when the colors are the same.
  • The second calculating unit 137 is configured to calculate, when the colors differ, the pixel value of the current pixel from the pixel values of the associated pixel unit by the first interpolation algorithm, the image pixel unit including the associated pixel unit, and the associated pixel unit being of the same color as the current pixel and adjacent to the current pixel.
  • The third calculating unit 139 is configured to convert the merged image into a restored image corresponding to the pseudo-original image by a second interpolation algorithm, the complexity of the second interpolation algorithm being lower than that of the first interpolation algorithm.
  • The merging module 14 is configured to merge the pseudo-original image and the restored image to obtain the merged pseudo-original image.
  • step S11 can be implemented by the output module 11, and step S12 can be implemented by the identification module 12.
  • step S13 can be implemented by the conversion module 13
  • step S131 can be implemented by the first determining unit 131
  • step S133 can be implemented by the second determining unit 133
  • step S135 can be implemented by the first calculating unit 135
  • step S137 can be performed by the second calculating unit.
  • step S139 can be implemented by the third computing unit 139
  • step S14 can be implemented by the merging module 14.
  • the control method of the embodiment of the present invention processes the image outside the face region of the patch image by using the first interpolation algorithm, and processes the merged image with the second interpolation algorithm.
  • The complexity of the second interpolation algorithm includes time complexity and space complexity; both the time complexity and the space complexity of the second interpolation algorithm are lower than those of the first interpolation algorithm. In this way, only part of the image is processed with the more complex first interpolation algorithm, which reduces the data to be processed and the time required for processing and improves the quality of that part of the image; at the same time, the sharpness of the face region is lower than that of the image outside the face region, which gives the user a better shooting experience.
  • the image sensor 21 of the embodiment of the present invention includes a photosensitive pixel unit array 212 and a filter unit array 211 disposed on the photosensitive pixel unit array 212.
  • the photosensitive pixel unit array 212 includes a plurality of photosensitive pixel units 212a, each of which includes a plurality of adjacent photosensitive pixels 2121.
  • Each of the photosensitive pixels 2121 includes a photosensitive device 21211 and a transfer tube 21212, wherein the photosensitive device 21211 can be a photodiode, and the transfer tube 21212 can be a MOS transistor.
  • the filter unit array 211 includes a plurality of filter units 211a, each of which covers a corresponding one of the photosensitive pixel units 212a.
  • In some examples, the filter unit array 211 is arranged as a Bayer array; that is, four adjacent filter units 211a consist of one red filter unit, one blue filter unit, and two green filter units.
  • Each photosensitive pixel unit 212a corresponds to a filter unit 211a of a single color: if one photosensitive pixel unit 212a includes a total of n adjacent photosensitive devices 21211, then one filter unit 211a covers the n photosensitive devices 21211 of that photosensitive pixel unit 212a.
  • The filter unit 211a may be of integral construction, or may be assembled from n independent sub-filters connected together.
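  • The resulting color filter layout can be pictured with a small sketch. The snippet below generates the color of the filter over each photosensitive pixel for this arrangement; the particular phase of the pattern (which corner is red) is an illustrative assumption, not something fixed by the text.

```python
import numpy as np

def filter_layout(height, width):
    """Colour of the filter over each photosensitive pixel: every 2x2
    photosensitive pixel unit shares one filter unit, and the filter units
    themselves follow a Bayer pattern (phase chosen arbitrarily here)."""
    unit_pattern = np.array([["R", "G"],
                             ["G", "B"]])
    layout = np.empty((height, width), dtype="<U1")
    for y in range(height):
        for x in range(width):
            # (y // 2, x // 2) identifies the filter unit covering this pixel.
            layout[y, x] = unit_pattern[(y // 2) % 2, (x // 2) % 2]
    return layout

# A 4x4 corner: four single-colour 2x2 blocks arranged as a Bayer pattern.
print(filter_layout(4, 4))
```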
  • In some embodiments, each photosensitive pixel unit 212a includes four adjacent photosensitive pixels 2121, and two adjacent photosensitive pixels 2121 together form one photosensitive pixel sub-unit 2120.
  • The photosensitive pixel sub-unit 2120 further includes a source follower 21213 and an analog-to-digital converter 21214.
  • The photosensitive pixel unit 212a further includes an adder 2122. One end electrode of each transfer tube 21212 in a photosensitive pixel sub-unit 2120 is connected to the cathode electrode of the corresponding photosensitive device 21211, and the other end of each transfer tube 21212 is commonly connected to the gate electrode of the source follower 21213, which is connected through its source electrode to an analog-to-digital converter 21214.
  • the source follower 21213 may be a MOS transistor.
  • the two photosensitive pixel subunits 2120 are connected to the adder 2122 through respective source followers 21213 and analog to digital converters 21214.
  • the adjacent four photosensitive devices 21211 of one photosensitive pixel unit 212a of the image sensor 21 of the embodiment of the present invention share a filter unit 211a of the same color, and each photosensitive device 21211 is connected to a transmission tube 21212.
  • the two adjacent photosensitive devices 21211 share a source follower 21213 and an analog to digital converter 21214, and the adjacent four photosensitive devices 21211 share an adder 2122.
  • adjacent four photosensitive devices 21211 are arranged in a 2*2 array.
  • the two photosensitive devices 21211 in one photosensitive pixel subunit 2120 may be in the same column.
  • During imaging, when the two photosensitive pixel sub-units 2120, that is, the four photosensitive devices 21211, covered by the same filter unit 211a are exposed simultaneously, the pixels can be merged and a merged image can be output.
  • the photosensitive device 21211 is configured to convert illumination into electric charge, and the generated electric charge is proportional to the illumination intensity, and the transmission tube 21212 is configured to control the on or off of the circuit according to the control signal.
  • the source follower 21213 is configured to convert the charge signal generated by the light-sensing device 21211 into a voltage signal.
  • Analog to digital converter 21214 is used to convert the voltage signal to a digital signal.
  • the adder 2122 is for summing the two digital signals for common output for processing by the image processing module connected to the image sensor 21.
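  • As a schematic illustration of this read-out chain (not the patent's circuit), the following sketch models one merged pixel: charge generation in the four photodiodes, column-wise combination through the shared source follower and A/D converter, and the final digital sum in the adder. The `exposure` factor and the use of `round` as a stand-in for the A/D conversion are assumptions made only for the example.

```python
def merged_pixel_readout(illuminance_2x2, exposure, adc):
    """Schematic model of the merged read-out chain for one photosensitive
    pixel unit (illustrative only): photodiodes turn light into charge, each
    column pair shares a source follower and A/D converter, and the adder
    2122 sums the two digital values into one merged pixel."""
    charges = [exposure * lux for lux in illuminance_2x2]   # photodiodes 21211
    column_a = adc(charges[0] + charges[2])   # sub-unit in the first column
    column_b = adc(charges[1] + charges[3])   # sub-unit in the second column
    return column_a + column_b                # adder 2122

# Four illuminance samples of one unit; `round` stands in for the A/D step.
print(merged_pixel_readout([120, 118, 121, 119], exposure=0.5, adc=round))
```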
  • Taking a 16M image sensor 21 as an example, the image sensor 21 of the embodiment of the present invention can merge the 16M photosensitive pixels into 4M, that is, output a merged image.
  • After merging, each photosensitive pixel is effectively four times its original size, which improves the sensitivity of the photosensitive pixel.
  • Moreover, since most of the noise in the image sensor 21 is random noise, for the photosensitive pixels before merging, noise may be present in one or two of them; after the four photosensitive pixels are merged into one large photosensitive pixel, the influence of that noise on the large pixel is reduced, that is, the noise is weakened and the signal-to-noise ratio is improved.
  • However, as the photosensitive pixel size becomes larger, the pixel count decreases, and the resolution of the merged image therefore also decreases.
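  • A minimal numerical sketch of this 2*2 merging (binning) and its effect on random noise is shown below; the flat test patch and noise level are arbitrary illustrative values.

```python
import numpy as np

def bin_2x2(raw):
    """Sum each 2x2 photosensitive pixel unit into one merged pixel, as the
    adder does in hardware; the output has a quarter of the pixels."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# A flat grey patch with random noise: after binning, the per-pixel noise of
# the (averaged) merged values is roughly halved, i.e. the SNR improves.
rng = np.random.default_rng(0)
raw = 100.0 + rng.normal(0.0, 5.0, size=(16, 16))
merged = bin_2x2(raw)
print(raw.std(), (merged / 4).std())
```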
  • During imaging, when the four photosensitive devices 21211 covered by the same filter unit 211a are exposed in sequence, the patch image can be output after image processing.
  • the photosensitive device 21211 is for converting illumination into electric charge, and the generated electric charge is proportional to the intensity of the illumination, and the transmission tube 21212 is for controlling the on or off of the circuit according to the control signal.
  • the source follower 21213 is configured to convert the charge signal generated by the light-sensing device 21211 into a voltage signal.
  • Analog to digital converter 21214 is used to convert the voltage signal to a digital signal for processing by an image processing module coupled to image sensor 21.
  • Taking a 16M image sensor 21 as an example, the image sensor 21 of the embodiment of the present invention can also retain the 16M photosensitive pixel output, that is, output a patch image, and the patch image includes image pixel units.
  • Each image pixel unit includes original pixels arranged in a 2*2 array, the size of an original pixel being the same as that of a photosensitive pixel. However, since the filter unit 211a covering four adjacent photosensitive devices 21211 is of a single color, although the four photosensitive devices 21211 are exposed separately, the filters covering them have the same color; therefore, the four adjacent original pixels of each image pixel unit are output in the same color, and the resolution of the image still cannot be improved.
  • The control method of the embodiment of the present invention is used to process the output patch image to obtain a pseudo-original image.
  • It can be understood that when the merged image is output, the four adjacent photosensitive pixels of the same color are output as one merged pixel, so the four adjacent merged pixels of the merged image can still be regarded as a typical Bayer array and can be received and processed directly by the image processing module to output a true-color image.
  • The patch image, however, outputs each photosensitive pixel separately; since the four adjacent photosensitive pixels have the same color, the four adjacent original pixels of one image pixel unit have the same color, which is an atypical Bayer array.
  • The image processing module cannot process the atypical Bayer array directly; that is, when the image sensor 21 uses a unified image processing mode, in order to be compatible with true-color image output in both modes, namely true-color output in merge mode and true-color output in patch mode, the patch image needs to be converted into a pseudo-original image, in other words, the image pixel units of the atypical Bayer array need to be converted into the pixel arrangement of a typical Bayer array.
  • The pseudo-original image includes pseudo-original pixels arranged in a Bayer array.
  • The pseudo-original pixels include a current pixel, and the original pixels include an associated pixel corresponding to the current pixel.
  • the control method of the embodiment of the present invention outputs a patch image and a merged image, respectively, and recognizes a face region on the merged image.
  • Based on the correspondence between the merged image and the patch image, the portion of the patch image outside the face region is first converted into a Bayer image array and then processed by the first interpolation algorithm. Specifically, referring to FIG. 10 as an example, the current pixels are R3'3' and R5'5', and the corresponding associated pixels are R33 and B55, respectively.
  • When acquiring the current pixel R3'3', since R3'3' and its associated pixel R33 have the same color, the pixel value of R33 is taken directly as the pixel value of R3'3' during conversion. When acquiring the current pixel R5'5', since R5'5' and its associated pixel B55 have different colors, the pixel value of B55 obviously cannot be taken directly as the pixel value of R5'5', and the value must instead be calculated by interpolation from the associated pixel unit of R5'5'.
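  • The per-pixel branching of steps S131 to S137 can be sketched as follows. This is only an illustrative outline: the 0-based layout phase, the `face_mask` array, and the `interpolate` callable (standing in for the gradient interpolation of S137 described below) are assumptions of the example, not details fixed by the patent.

```python
import numpy as np

BAYER = np.array([["R", "G"],
                  ["G", "B"]])   # colour wanted at each pseudo-original pixel

def patch_colour(y, x):
    """Colour recorded at (y, x) in the patch image, whose 2x2 same-colour
    units are themselves laid out as a Bayer pattern."""
    return BAYER[(y // 2) % 2, (x // 2) % 2]

def pseudo_original_pixel(patch, y, x, face_mask, interpolate):
    """Steps S131-S137 for a single current pixel."""
    if not face_mask[y, x]:
        wanted = BAYER[y % 2, x % 2]             # colour of the current pixel
        if wanted == patch_colour(y, x):         # S133/S135: same colour -> copy
            return patch[y, x]
        return interpolate(patch, y, x, wanted)  # S137: different colour
    # S131: inside the face region this pixel is not produced by the first
    # interpolation; its value comes from the restored image after merging.
    return None
```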
  • the pixel values above and below should be broadly understood as the color attribute values of the pixel, such as color values.
  • the associated pixel unit includes a plurality of, for example, four, original pixels in the image pixel unit that are the same color as the current pixel and are adjacent to the current pixel.
  • It should be noted that 'adjacent' here is to be understood broadly. Taking FIG. 10 as an example, the associated pixel corresponding to R5'5' is B55, and the image pixel units that are adjacent to the image pixel unit where B55 is located and contain associated pixels of the same color as R5'5' are the image pixel units where R44, R74, R47, and R77 are located, not other red image pixel units that are spatially farther from the image pixel unit where B55 is located.
  • The red original pixels spatially closest to B55 are R44, R74, R47, and R77; that is, the associated pixel unit of R5'5' consists of R44, R74, R47, and R77, and R5'5' is of the same color as and adjacent to R44, R74, R47, and R77.
  • In this way, depending on the situation of each current pixel, the original pixels are converted into pseudo-original pixels in different ways, thereby converting the patch image into the pseudo-original image. Because a filter with a special Bayer array structure is used when capturing the image, the signal-to-noise ratio of the image is improved, and because the patch image is interpolated with the first interpolation algorithm during image processing, the resolution and detail of the image are improved.
  • In some embodiments, step S137 includes the following steps:
  • S1371: calculating the gradient in each direction of the associated pixels;
  • S1372: calculating the weight in each direction of the associated pixels; and
  • S1373: calculating the pixel value of the current pixel from the gradients and the weights.
  • the second calculating unit 137 includes a first calculating subunit 1371, a second calculating subunit 1372, and a third calculating subunit 1373.
  • The first calculating sub-unit 1371 is configured to calculate the gradient in each direction of the associated pixels; the second calculating sub-unit 1372 is configured to calculate the weight in each direction of the associated pixels; and the third calculating sub-unit 1373 is configured to calculate the pixel value of the current pixel from the gradients and the weights.
  • step S1371 can be implemented by the first calculation sub-unit 1371
  • step S1372 can be implemented by the second calculation sub-unit 1372
  • step S1373 can be implemented by the third calculation sub-unit 1373.
  • Specifically, the first interpolation algorithm references the energy gradient of the image in different directions, and calculates the pixel value of the current pixel by linear interpolation from the associated pixel unit that is of the same color as and adjacent to the current pixel, according to the gradient weights in the different directions.
  • In the direction where the energy changes less, the reference proportion is larger, and therefore the weight in the interpolation calculation is larger. In some examples, for ease of calculation, only the horizontal and vertical directions are considered.
  • R5'5' is obtained by interpolation from R44, R74, R47, and R77, and since there are no original pixels of the same color in its horizontal and vertical directions, the components of this color in the horizontal and vertical directions are calculated from the associated pixel unit.
  • The components in the horizontal direction are R45 and R75;
  • the components in the vertical direction are R54 and R57, which can be calculated from R44, R74, R47, and R77, respectively.
  • R45 = R44*2/3 + R47*1/3
  • R75 = 2/3*R74 + 1/3*R77
  • R54 = 2/3*R44 + 1/3*R74
  • R57 = 2/3*R47 + 1/3*R77
  • Then, the gradients and the weights in the horizontal and vertical directions are calculated respectively; that is, the gradients of this color in the different directions determine the reference weights of the different directions during interpolation: the weight is larger in the direction with the smaller gradient, and smaller in the direction with the larger gradient.
  • The gradient in the horizontal direction is X1 = |R45 - R75|;
  • the gradient in the vertical direction is X2 = |R54 - R57|;
  • W1 = X1/(X1+X2);
  • W2 = X2/(X1+X2).
  • From the above, R5'5' = (2/3*R45 + 1/3*R75)*W2 + (2/3*R54 + 1/3*R57)*W1. It can be understood that if X1 is greater than X2, then W1 is greater than W2, so the weight of the horizontal direction in the calculation is W2 and the weight of the vertical direction is W1, and vice versa.
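  • The worked example above translates directly into the following sketch, which reproduces the formulas for R5'5' as written; the sample values in the final line are arbitrary. A real implementation would also need to guard against the degenerate case X1 + X2 = 0 (a perfectly flat neighbourhood), which the formulas leave open.

```python
def interpolate_r55(R44, R47, R74, R77):
    """Reproduces the worked example: gradient-weighted linear interpolation
    of the current pixel R5'5' from its associated pixel unit."""
    # Colour components in the horizontal and vertical directions.
    R45 = R44 * 2 / 3 + R47 * 1 / 3
    R75 = R74 * 2 / 3 + R77 * 1 / 3
    R54 = R44 * 2 / 3 + R74 * 1 / 3
    R57 = R47 * 2 / 3 + R77 * 1 / 3

    # Gradients in the two directions and the corresponding weights.
    X1 = abs(R45 - R75)            # horizontal gradient
    X2 = abs(R54 - R57)            # vertical gradient
    W1 = X1 / (X1 + X2)
    W2 = X2 / (X1 + X2)

    # The direction with the smaller gradient receives the larger weight:
    # the horizontal estimate is weighted by W2, the vertical one by W1.
    return (2 / 3 * R45 + 1 / 3 * R75) * W2 + (2 / 3 * R54 + 1 / 3 * R57) * W1

print(interpolate_r55(R44=100, R47=110, R74=104, R77=114))   # sample values
```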
  • the pixel value of the current pixel can be calculated according to the first interpolation algorithm.
  • By handling the associated pixels in the ways described above, the original pixels can be converted into pseudo-original pixels arranged in a typical Bayer array; that is, each adjacent 2*2 array of four pseudo-original pixels includes one red pseudo-original pixel, two green pseudo-original pixels, and one blue pseudo-original pixel.
  • It should be noted that the first interpolation algorithm is not limited to the manner described here, in which only pixel values of the same color in the vertical and horizontal directions are considered in the calculation; for example, pixel values of other colors may also be referenced.
  • In some embodiments, before step S137 the method includes the step:
  • S136a: performing white balance compensation on the patch image;
  • and after step S137 the method includes the step:
  • S138a: performing white balance compensation restoration on the pseudo-original image.
  • the conversion module 13 includes a white balance compensation unit 136a and a white balance compensation reduction unit 138a.
  • the white balance compensation unit 136a is configured to perform white balance compensation on the patch image
  • the white balance compensation and restoration unit 138a is configured to perform white balance compensation and restoration on the original image.
  • step S136a can be implemented by the white balance compensation unit 136a
  • step S138a can be implemented by the white balance compensation restoration unit 138a.
  • Specifically, in some examples, in the process of converting the patch image into the pseudo-original image, the interpolation of red and blue pseudo-original pixels often references not only the colors of the original pixels of the channel of the same color but also the color weights of the original pixels of the green channel; therefore, white balance compensation needs to be performed before interpolation so as to exclude the influence of white balance from the interpolation calculation.
  • In order not to destroy the white balance of the patch image, the pseudo-original image needs to undergo white balance compensation restoration after interpolation, the restoration being performed according to the red, green, and blue gain values used in the compensation.
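  • One plausible reading of this compensate-then-restore scheme is sketched below: the per-channel white-balance gains are divided out of the patch image before interpolation and multiplied back into the pseudo-original image afterwards. The direction of the correction, the `gains` dictionary, and the `colour_map` array (the colour letter of each pixel of the image being passed) are assumptions of the sketch; the text only states that the restoration uses the red, green, and blue gain values from the compensation.

```python
import numpy as np

def wb_compensate(patch, colour_map, gains):
    """Divide the per-channel white-balance gains out of the patch image
    before interpolation (assumed direction of the correction)."""
    out = patch.astype(np.float64).copy()
    for colour, gain in gains.items():      # e.g. {"R": 1.8, "G": 1.0, "B": 1.5}
        out[colour_map == colour] /= gain
    return out

def wb_restore(pseudo_original, colour_map, gains):
    """Multiply the same gains back in after interpolation so the
    pseudo-original image keeps the white balance of the patch image."""
    out = pseudo_original.astype(np.float64).copy()
    for colour, gain in gains.items():
        out[colour_map == colour] *= gain
    return out
```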
  • Referring again to FIG. 13, in some embodiments, before step S137 the method includes the step:
  • S136b: performing dead pixel compensation on the patch image.
  • Referring again to FIG. 14, in some embodiments, the conversion module 13 includes a dead pixel compensation unit 136b.
  • That is to say, step S136b can be implemented by the dead pixel compensation unit 136b.
  • It can be understood that, limited by the manufacturing process, the image sensor 21 may have dead pixels.
  • A dead pixel usually always presents the same color regardless of changes in sensitivity, and its presence affects the image quality; therefore, to ensure that the interpolation is accurate and unaffected by dead pixels, dead pixel compensation needs to be performed before interpolation.
  • Specifically, during dead pixel compensation, the original pixels can be detected.
  • When an original pixel is detected as a dead pixel, the compensation can be performed according to the pixel values of the other original pixels of the image pixel unit where it is located.
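  • A minimal sketch of that replacement step is shown below. It assumes the dead pixels have already been detected and flagged in a boolean `dead_mask`; the detection itself and the choice of the mean as the replacement statistic are illustrative assumptions.

```python
import numpy as np

def compensate_dead_pixels(patch, dead_mask):
    """Replace each flagged original pixel with the mean of the healthy
    original pixels of its own 2x2 image pixel unit (same colour by
    construction)."""
    out = patch.astype(np.float64).copy()
    for y, x in zip(*np.nonzero(dead_mask)):
        uy, ux = (y // 2) * 2, (x // 2) * 2       # top-left corner of the unit
        unit = patch[uy:uy + 2, ux:ux + 2]
        healthy = ~dead_mask[uy:uy + 2, ux:ux + 2]
        if healthy.any():
            out[y, x] = unit[healthy].mean()
    return out
```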
  • In some embodiments, before step S137 the method includes the step:
  • S136c: performing crosstalk compensation on the patch image.
  • In some embodiments, the conversion module 13 includes a crosstalk compensation unit 136c, and step S136c can be implemented by the crosstalk compensation unit 136c.
  • It can be understood that the four photosensitive pixels in one photosensitive pixel unit are covered by a filter of the same color, and there may be differences in sensitivity between the photosensitive pixels, so that fixed spectral noise appears in the solid-color regions of the true-color image converted from the pseudo-original image, affecting the quality of the image; therefore, crosstalk compensation needs to be performed on the patch image.
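  • The crosstalk compensation algorithm itself is not spelled out at this point. As an assumed illustration of one common approach, the sketch below calibrates a gain for each of the four positions inside a 2*2 unit from a flat-field frame and then applies those gains to the patch image, which removes the fixed pattern caused by sensitivity differences.

```python
import numpy as np

def calibrate_crosstalk_gains(flat_field):
    """Estimate one gain per position inside the 2x2 unit from a flat-field
    frame: under uniform light all four samples of a unit should match, so
    the per-position averages expose the sensitivity differences."""
    h, w = flat_field.shape
    units = flat_field.reshape(h // 2, 2, w // 2, 2).astype(np.float64)
    pos_mean = units.mean(axis=(0, 2))          # 2x2 mean, one per position
    return pos_mean.mean() / pos_mean           # gains that equalise them

def apply_crosstalk_compensation(patch, gains):
    """Scale every original pixel by the gain of its position in the unit."""
    h, w = patch.shape
    return patch * np.tile(gains, (h // 2, w // 2))
```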
  • In some embodiments, after step S137 the method includes the step:
  • S138b: performing lens shading correction, demosaicing, noise reduction, and edge sharpening on the pseudo-original image.
  • the conversion module 13 further includes a processing unit 138b.
  • step S138b can be implemented by the processing unit 138b.
  • It can be understood that after the patch image is converted into the pseudo-original image, the pseudo-original pixels are arranged in a typical Bayer array and can be processed by the processing unit 138b, the processing including lens shading correction, demosaicing, noise reduction, and edge sharpening; in this way, the true-color image can be output to the user after processing.
  • Since the image sensor 21 outputs the merged image by directly combining the 16M photosensitive pixels into 4M, to facilitate the subsequent merging with the patch image, the merged image is first stretched and enlarged by the second interpolation algorithm and converted into a restored image of the same size as the patch image.
  • The portion of the patch image outside the face region is processed into the pseudo-original image by the first interpolation algorithm, the merged image is processed into the restored image by the second interpolation algorithm, and the two frames are then merged to obtain the merged pseudo-original image. In the merged pseudo-original image, the image outside the face region has a higher resolution and sharpness.
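  • The following sketch illustrates this final stage. The second interpolation algorithm only needs to be less complex than the first, so the nearest-neighbour up-scaling used here is purely an illustrative low-complexity choice, and the sketch ignores colour-phase details of the mosaics; `face_mask` marks the face region in patch-image coordinates.

```python
import numpy as np

def second_interpolation(merged, out_shape):
    """Low-complexity stretch of the merged image to patch-image size
    (nearest neighbour here, chosen only for illustration)."""
    h, w = out_shape
    ys = np.arange(h) * merged.shape[0] // h
    xs = np.arange(w) * merged.shape[1] // w
    return merged[np.ix_(ys, xs)]

def merge_frames(pseudo_original, restored, face_mask):
    """S14: face region from the restored image, everything else from the
    sharper pseudo-original image."""
    return np.where(face_mask, restored, pseudo_original)
```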
  • an electronic device 100 of an embodiment of the present invention includes a control device 10, an imaging device 20, and a touch screen 30.
  • electronic device 100 includes a cell phone and a tablet.
  • Both the mobile phone and the tablet computer have a camera, that is, an imaging device 20.
  • the control method of the embodiment of the present invention can be used to obtain a high-resolution picture.
  • the electronic device 100 also includes other electronic devices having a photographing function.
  • The control method of the embodiment of the present invention is one of the designated processing modes with which the electronic device 100 performs image processing. That is to say, when the user takes a photograph with the electronic device 100, the user needs to select among the various designated processing modes included in the electronic device 100.
  • When the user selects the designated processing mode of the embodiment of the present invention, the user can select the predetermined region autonomously.
  • the electronic device 100 performs image processing using the control method of the embodiment of the present invention.
  • imaging device 20 includes a front camera and a rear camera.
  • many electronic devices 100 include a front camera and a rear camera.
  • the front camera and the rear camera can implement image processing by using the control method of the embodiment of the present invention to improve the user experience.
  • an electronic device 100 includes a processor 40, a memory 50, a circuit board 60, a power supply circuit 70, and a housing 80.
  • The circuit board 60 is disposed inside the space enclosed by the housing 80, and the processor 40 and the memory 50 are disposed on the circuit board 60; the power supply circuit 70 is used to supply power to the respective circuits or devices of the electronic device 100; and the memory 50 is used for storing executable program code.
  • The processor 40 runs the program corresponding to the executable program code by reading the executable program code stored in the memory 50, so as to implement the control method of any of the above embodiments of the present invention.
  • processor 40 can be used to perform the following steps:
  • S11: Control the image sensor 21 to output a merged image and a patch image of the same scene, the merged image including a merged pixel array in which the plurality of photosensitive pixels 2121 of the same photosensitive pixel unit 212a are merged and output as one merged pixel, and the patch image including image pixel units arranged in a predetermined array, each image pixel unit including a plurality of original pixels and each photosensitive pixel corresponding to one original pixel;
  • S12: Identify a face region from the merged image;
  • S13: Convert the patch image into a pseudo-original image using a first interpolation algorithm, the pseudo-original image including pseudo-original pixels arranged in an array, the pseudo-original pixels including a current pixel, and the original pixels including an associated pixel corresponding to the current pixel, wherein converting the patch image into the pseudo-original image using the first interpolation algorithm includes: determining whether the associated pixel is outside the face region; determining, when the associated pixel is outside the face region, whether the color of the current pixel is the same as the color of the associated pixel; taking the pixel value of the associated pixel as the pixel value of the current pixel when the colors are the same; and calculating, when the colors differ, the pixel value of the current pixel from the pixel values of an associated pixel unit by the first interpolation algorithm;
  • S139: Convert the merged image into a restored image corresponding to the pseudo-original image by a second interpolation algorithm, the complexity of the second interpolation algorithm being lower than that of the first interpolation algorithm; and
  • S14: Merge the pseudo-original image and the restored image to obtain the merged pseudo-original image.
  • a readable storage medium in accordance with an embodiment of the present invention has instructions stored therein.
  • the processor 40 of the electronic device 100 executes the instruction, the electronic device 100 performs the control method described in any of the embodiments of the present invention described above.
  • the electronic device 100 can perform the following steps:
  • S11: Control the image sensor 21 to output a merged image and a patch image of the same scene, the merged image including a merged pixel array in which the plurality of photosensitive pixels 2121 of the same photosensitive pixel unit 212a are merged and output as one merged pixel, and the patch image including image pixel units arranged in a predetermined array, each image pixel unit including a plurality of original pixels and each photosensitive pixel corresponding to one original pixel;
  • S12: Identify a face region from the merged image;
  • S13: Convert the patch image into a pseudo-original image using a first interpolation algorithm, the pseudo-original image including pseudo-original pixels arranged in an array, the pseudo-original pixels including a current pixel, and the original pixels including an associated pixel corresponding to the current pixel, wherein converting the patch image into the pseudo-original image using the first interpolation algorithm includes: determining whether the associated pixel is outside the face region; determining, when the associated pixel is outside the face region, whether the color of the current pixel is the same as the color of the associated pixel; taking the pixel value of the associated pixel as the pixel value of the current pixel when the colors are the same; and calculating, when the colors differ, the pixel value of the current pixel from the pixel values of an associated pixel unit by the first interpolation algorithm, the associated pixel unit being of the same color as the current pixel and adjacent to the current pixel;
  • S139: Convert the merged image into a restored image corresponding to the pseudo-original image by a second interpolation algorithm, the complexity of the second interpolation algorithm being lower than that of the first interpolation algorithm; and
  • S14: Merge the pseudo-original image and the restored image to obtain the merged pseudo-original image.
  • a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (an electronic device) having one or more wires, a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM).
  • In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
  • portions of the invention may be implemented in hardware, software, firmware or a combination thereof.
  • multiple steps or methods may be performed by software or firmware stored in a memory and executed by a suitable instruction execution system.
  • For example, if implemented in hardware, as in another embodiment, they may be implemented by any one of, or a combination of, the following techniques known in the art: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
  • each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • The above integrated modules may be implemented in the form of hardware or in the form of software functional modules.
  • The integrated modules, if implemented in the form of software functional modules and sold or used as separate products, may also be stored in a computer-readable storage medium.
  • the above mentioned storage medium may be a read only memory, a magnetic disk or an optical disk or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

The present invention discloses a control method, a control device, an electronic device, and a computer-readable storage medium. First, an image sensor is controlled to output a merged image and a patch image of the same scene; next, a face region is identified from the merged image; then, the portion of the patch image outside the face region is processed with a first interpolation algorithm, and the merged image is processed with a second interpolation algorithm; finally, the two interpolated frames are merged. Through recognition of the face region, the control method, control device, and electronic device of the embodiments of the present invention perform image processing on the portion of the patch image outside the face region, thereby obtaining a high-quality image while avoiding the large amount of work that processing the entire frame would entail, which improves efficiency.

Description

Control method, control device, electronic device and computer-readable storage medium
Priority Information
This application claims priority to and the benefit of Chinese Patent Application No. 201611078876.3, filed with the State Intellectual Property Office of China on November 29, 2016, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to image processing technology, and more particularly to a control method, a control device, an electronic device, and a computer-readable storage medium.
Background
An existing image sensor includes a photosensitive pixel unit array and a filter unit array disposed on the photosensitive pixel unit array, each filter unit covering a corresponding photosensitive pixel unit, and each photosensitive pixel unit including a plurality of photosensitive pixels. In operation, the image sensor can be controlled to expose and output a merged image, the merged image including a merged pixel array in which the plurality of photosensitive pixels of the same photosensitive pixel unit are merged and output as one merged pixel. This improves the signal-to-noise ratio of the merged image, but lowers its resolution. The image sensor can of course also be controlled to expose and output a high-pixel-count patch image, the patch image including an original pixel array in which each photosensitive pixel corresponds to one original pixel. However, since the plurality of original pixels corresponding to the same filter unit have the same color, the resolution of the patch image likewise cannot be improved. It is therefore necessary to convert the high-pixel patch image into a high-pixel pseudo-original image by interpolation, the pseudo-original image including pseudo-original pixels arranged in a Bayer array. The pseudo-original image can be converted into a true-color image by the control method and saved. Interpolation improves the sharpness of the true-color image, but it is resource-intensive and time-consuming and lengthens the shooting time; moreover, in practice, applying high-definition processing to certain parts of the true-color image, such as human faces, actually degrades the user experience.
Summary
Embodiments of the present invention provide a control method, a control device, an electronic device, and a computer-readable storage medium.
The control method of embodiments of the present invention is used to control an electronic device. The electronic device includes an imaging device and a touch screen, the imaging device includes an image sensor, and the image sensor includes a photosensitive pixel unit array and a filter unit array disposed on the photosensitive pixel unit array, each filter unit covering a corresponding photosensitive pixel unit, and each photosensitive pixel unit including a plurality of photosensitive pixels. The control method includes the following steps:
controlling the image sensor to output a merged image and a patch image of the same scene, the merged image including a merged pixel array in which the plurality of photosensitive pixels of the same photosensitive pixel unit are merged and output as one merged pixel, and the patch image including image pixel units arranged in a predetermined array, each image pixel unit including a plurality of original pixels, and each photosensitive pixel corresponding to one original pixel;
identifying a face region from the merged image;
converting the patch image into a pseudo-original image using a first interpolation algorithm, the pseudo-original image including pseudo-original pixels arranged in an array, the pseudo-original pixels including a current pixel, and the original pixels including an associated pixel corresponding to the current pixel, wherein converting the patch image into the pseudo-original image using the first interpolation algorithm includes the following steps:
determining whether the associated pixel is outside the face region;
determining, when the associated pixel is outside the face region, whether the color of the current pixel is the same as the color of the associated pixel;
taking the pixel value of the associated pixel as the pixel value of the current pixel when the color of the current pixel is the same as the color of the associated pixel; and
calculating, when the color of the current pixel differs from the color of the associated pixel, the pixel value of the current pixel from the pixel values of an associated pixel unit by the first interpolation algorithm, the image pixel unit including the associated pixel unit, and the associated pixel unit being of the same color as the current pixel and adjacent to the current pixel; and
converting the merged image into a restored image corresponding to the pseudo-original image by a second interpolation algorithm, the complexity of the second interpolation algorithm being lower than that of the first interpolation algorithm; and
merging the pseudo-original image and the restored image to obtain the merged pseudo-original image.
The control device of embodiments of the present invention is used to control an electronic device. The electronic device includes an imaging device and a touch screen, the imaging device includes an image sensor, and the image sensor includes a photosensitive pixel unit array and a filter unit array disposed on the photosensitive pixel unit array, each filter unit covering a corresponding photosensitive pixel unit, and each photosensitive pixel unit including a plurality of photosensitive pixels. The control device includes an output module, an identification module, a conversion module, and a merging module. The output module is configured to control the image sensor to output a merged image and a patch image of the same scene, the merged image including a merged pixel array in which the plurality of photosensitive pixels of the same photosensitive pixel unit are merged and output as one merged pixel, and the patch image including image pixel units arranged in a predetermined array, each image pixel unit including a plurality of original pixels, and each photosensitive pixel corresponding to one original pixel. The identification module is configured to identify a face region from the merged image. The conversion module is configured to convert the patch image into a pseudo-original image using a first interpolation algorithm, the pseudo-original image including pseudo-original pixels arranged in an array, the pseudo-original pixels including a current pixel, and the original pixels including an associated pixel corresponding to the current pixel. The conversion module includes a first determining unit, a second determining unit, a first calculating unit, a second calculating unit, and a third calculating unit. The first determining unit is configured to determine whether the associated pixel is outside the face region; the second determining unit is configured to determine, when the associated pixel is outside the face region, whether the color of the current pixel is the same as the color of the associated pixel; the first calculating unit is configured to take the pixel value of the associated pixel as the pixel value of the current pixel when the color of the current pixel is the same as the color of the associated pixel; the second calculating unit is configured to calculate, when the color of the current pixel differs from the color of the associated pixel, the pixel value of the current pixel from the pixel values of an associated pixel unit by the first interpolation algorithm, the image pixel unit including the associated pixel unit, and the associated pixel unit being of the same color as the current pixel and adjacent to the current pixel; the third calculating unit is configured to convert the merged image into a restored image corresponding to the pseudo-original image by a second interpolation algorithm, the complexity of the second interpolation algorithm being lower than that of the first interpolation algorithm; and the merging module is configured to merge the pseudo-original image and the restored image to obtain the merged pseudo-original image.
The electronic device of embodiments of the present invention includes an imaging device, a touch screen, and the control device described above.
The electronic device of embodiments of the present invention includes a housing, a processor, a memory, a circuit board, and a power supply circuit. The circuit board is disposed inside the space enclosed by the housing, and the processor and the memory are disposed on the circuit board; the power supply circuit is configured to supply power to the respective circuits or devices of the electronic device; the memory is configured to store executable program code; and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to execute the control method described above.
The computer-readable storage medium of embodiments of the present invention has instructions stored therein. When a processor of an electronic device executes the instructions, the electronic device performs the control method described above.
The control method, control device, electronic device, and computer-readable storage medium of embodiments of the present invention process the portion of the patch image outside the face region with a first interpolation algorithm to improve the resolution and detail of the image outside the face region, and process the merged image with a second interpolation algorithm whose complexity is lower than that of the first interpolation algorithm. This improves the signal-to-noise ratio, resolution, and detail of the image while reducing the data to be processed and the processing time, improving the user experience.
Additional aspects and advantages of embodiments of the present invention will be set forth in part in the following description, will become apparent in part from the following description, or will be learned through practice of the embodiments of the present invention.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flowchart of a control method according to an embodiment of the present invention;
FIG. 2 is another schematic flowchart of a control method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the functional modules of a control device according to an embodiment of the present invention;
FIG. 4 is a schematic block diagram of an image sensor according to an embodiment of the present invention;
FIG. 5 is a circuit diagram of an image sensor according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a filter unit according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an image sensor according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a merged image state according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a patch image state according to an embodiment of the present invention;
FIG. 10 is a schematic state diagram of a control method according to an embodiment of the present invention;
FIG. 11 is a schematic flowchart of a control method according to some embodiments of the present invention;
FIG. 12 is a schematic diagram of the functional modules of a second calculating unit according to some embodiments of the present invention;
FIG. 13 is a schematic flowchart of a control method according to some embodiments of the present invention;
FIG. 14 is a schematic diagram of the functional modules of a control device according to some embodiments of the present invention;
FIG. 15 is a schematic diagram of the image merging state of a control method according to some embodiments of the present invention;
FIG. 16 is a schematic diagram of the functional modules of an electronic device according to some embodiments of the present invention;
FIG. 17 is a schematic diagram of the functional modules of an electronic device according to some embodiments of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, and examples of the embodiments are illustrated in the accompanying drawings, in which the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present invention, and are not to be construed as limiting the present invention.
Referring to FIG. 1 to FIG. 5 and FIG. 16 together, the control method of an embodiment of the present invention is used to control an electronic device 100. The electronic device 100 includes an imaging device 20 and a touch screen 30, the imaging device 20 includes an image sensor 21, and the image sensor 21 includes a photosensitive pixel unit array 212 and a filter unit array 211 disposed on the photosensitive pixel unit array 212, each filter unit 211a covering a corresponding photosensitive pixel unit 212a, and each photosensitive pixel unit 212a including a plurality of photosensitive pixels 2121. The control method includes the following steps:
S11: controlling the image sensor 21 to output a merged image and a patch image of the same scene, the merged image including a merged pixel array in which the plurality of photosensitive pixels 2121 of the same photosensitive pixel unit 212a are merged and output as one merged pixel, and the patch image including image pixel units arranged in a predetermined array, each image pixel unit including a plurality of original pixels, and each photosensitive pixel corresponding to one original pixel;
S12: identifying a face region from the merged image;
S13: converting the patch image into a pseudo-original image using a first interpolation algorithm, the pseudo-original image including pseudo-original pixels arranged in an array, the pseudo-original pixels including a current pixel, and the original pixels including an associated pixel corresponding to the current pixel, wherein converting the patch image into the pseudo-original image using the first interpolation algorithm includes the following steps:
S131: determining whether the associated pixel is outside the face region;
S133: determining, when the associated pixel is outside the face region, whether the color of the current pixel is the same as the color of the associated pixel;
S135: taking the pixel value of the associated pixel as the pixel value of the current pixel when the color of the current pixel is the same as the color of the associated pixel; and
S137: calculating, when the color of the current pixel differs from the color of the associated pixel, the pixel value of the current pixel from the pixel values of an associated pixel unit by the first interpolation algorithm, the image pixel unit including the associated pixel unit, and the associated pixel unit being of the same color as the current pixel and adjacent to the current pixel; and
S139: converting the merged image into a restored image corresponding to the pseudo-original image by a second interpolation algorithm, the complexity of the second interpolation algorithm being lower than that of the first interpolation algorithm; and
S14: merging the pseudo-original image and the restored image to obtain the merged pseudo-original image.
Referring to FIG. 3, the control method of the embodiment of the present invention may be implemented by the control device 10 of the embodiment of the present invention.
The control device 10 is used to control the electronic device 100. The electronic device 100 includes an imaging device 20 and a touch screen 30, the imaging device 20 includes an image sensor 21, and the image sensor 21 includes a photosensitive pixel unit array 212 and a filter unit array 211 disposed on the photosensitive pixel unit array 212, each filter unit 211a covering a corresponding photosensitive pixel unit 212a, and each photosensitive pixel unit 212a including a plurality of photosensitive pixels 2121. The control device 10 includes an output module 11, an identification module 12, a conversion module 13, and a merging module 14. The output module 11 is configured to control the image sensor 21 to output a merged image and a patch image of the same scene, the merged image including a merged pixel array in which the plurality of photosensitive pixels 2121 of the same photosensitive pixel unit 212a are merged and output as one merged pixel, and the patch image including image pixel units arranged in a predetermined array, each image pixel unit including a plurality of original pixels, and each photosensitive pixel 2121 corresponding to one original pixel. The identification module 12 is configured to identify a face region from the merged image. The conversion module 13 is configured to convert the patch image into a pseudo-original image using a first interpolation algorithm, the pseudo-original image including pseudo-original pixels arranged in an array, the pseudo-original pixels including a current pixel, and the original pixels including an associated pixel corresponding to the current pixel. The conversion module 13 includes a first determining unit 131, a second determining unit 133, a first calculating unit 135, a second calculating unit 137, and a third calculating unit 139. The first determining unit 131 is configured to determine whether the associated pixel is outside the face region; the second determining unit 133 is configured to determine, when the associated pixel is outside the face region, whether the color of the current pixel is the same as the color of the associated pixel; the first calculating unit 135 is configured to take the pixel value of the associated pixel as the pixel value of the current pixel when the color of the current pixel is the same as the color of the associated pixel; the second calculating unit 137 is configured to calculate, when the color of the current pixel differs from the color of the associated pixel, the pixel value of the current pixel from the pixel values of an associated pixel unit by the first interpolation algorithm, the image pixel unit including the associated pixel unit, and the associated pixel unit being of the same color as the current pixel and adjacent to the current pixel; the third calculating unit 139 is configured to convert the merged image into a restored image corresponding to the pseudo-original image by a second interpolation algorithm, the complexity of the second interpolation algorithm being lower than that of the first interpolation algorithm; and the merging module 14 is configured to merge the pseudo-original image and the restored image to obtain the merged pseudo-original image.
That is to say, step S11 may be implemented by the output module 11, step S12 may be implemented by the identification module 12, step S13 may be implemented by the conversion module 13, step S131 may be implemented by the first determining unit 131, step S133 may be implemented by the second determining unit 133, step S135 may be implemented by the first calculating unit 135, step S137 may be implemented by the second calculating unit 137, step S139 may be implemented by the third calculating unit 139, and step S14 may be implemented by the merging module 14.
It can be understood that the control method of the embodiment of the present invention processes the portion of the patch image outside the face region with the first interpolation algorithm and processes the merged image with the second interpolation algorithm. The complexity of the second interpolation algorithm includes time complexity and space complexity, and both the time complexity and the space complexity of the second interpolation algorithm are lower than those of the first interpolation algorithm. In this way, only part of the image is processed with the more complex first interpolation algorithm, which reduces the data to be processed and the time required for processing and improves the quality of that part of the image; at the same time, the sharpness of the face region is lower than that of the image outside the face region, which gives the user a better shooting experience.
Referring to FIG. 4 to FIG. 7 together, the image sensor 21 of the embodiment of the present invention includes a photosensitive pixel unit array 212 and a filter unit array 211 disposed on the photosensitive pixel unit array 212.
Further, the photosensitive pixel unit array 212 includes a plurality of photosensitive pixel units 212a, and each photosensitive pixel unit 212a includes a plurality of adjacent photosensitive pixels 2121. Each photosensitive pixel 2121 includes a photosensitive device 21211 and a transfer tube 21212, wherein the photosensitive device 21211 may be a photodiode and the transfer tube 21212 may be a MOS transistor.
The filter unit array 211 includes a plurality of filter units 211a, and each filter unit 211a covers a corresponding photosensitive pixel unit 212a.
Specifically, in some examples, the filter unit array 211 is arranged as a Bayer array; that is, four adjacent filter units 211a consist of one red filter unit, one blue filter unit, and two green filter units.
Each photosensitive pixel unit 212a corresponds to a filter unit 211a of a single color. If a photosensitive pixel unit 212a includes a total of n adjacent photosensitive devices 21211, then one filter unit 211a covers the n photosensitive devices 21211 of that photosensitive pixel unit 212a; the filter unit 211a may be of integral construction or may be assembled from n independent sub-filters connected together.
In some embodiments, each photosensitive pixel unit 212a includes four adjacent photosensitive pixels 2121, and two adjacent photosensitive pixels 2121 together form one photosensitive pixel sub-unit 2120. The photosensitive pixel sub-unit 2120 further includes a source follower 21213 and an analog-to-digital converter 21214, and the photosensitive pixel unit 212a further includes an adder 2122. One end electrode of each transfer tube 21212 in a photosensitive pixel sub-unit 2120 is connected to the cathode electrode of the corresponding photosensitive device 21211, and the other end of each transfer tube 21212 is commonly connected to the gate electrode of the source follower 21213, which is connected through its source electrode to an analog-to-digital converter 21214. The source follower 21213 may be a MOS transistor. The two photosensitive pixel sub-units 2120 are connected to the adder 2122 through their respective source followers 21213 and analog-to-digital converters 21214.
That is to say, in the image sensor 21 of the embodiment of the present invention, the four adjacent photosensitive devices 21211 of one photosensitive pixel unit 212a share one filter unit 211a of the same color, each photosensitive device 21211 is connected to one transfer tube 21212, two adjacent photosensitive devices 21211 share one source follower 21213 and one analog-to-digital converter 21214, and the four adjacent photosensitive devices 21211 share one adder 2122.
Further, the four adjacent photosensitive devices 21211 are arranged in a 2*2 array, and the two photosensitive devices 21211 of one photosensitive pixel sub-unit 2120 may be located in the same column.
During imaging, when the two photosensitive pixel sub-units 2120, that is, the four photosensitive devices 21211, covered by the same filter unit 211a are exposed simultaneously, the pixels can be merged and a merged image can be output.
Specifically, the photosensitive device 21211 is configured to convert illumination into electric charge, the generated charge being proportional to the illumination intensity, and the transfer tube 21212 is configured to switch the circuit on or off according to a control signal. When the circuit is on, the source follower 21213 is configured to convert the charge signal generated by the photosensitive device 21211 under illumination into a voltage signal, the analog-to-digital converter 21214 is configured to convert the voltage signal into a digital signal, and the adder 2122 is configured to add the two digital signals and output the sum for processing by an image processing module connected to the image sensor 21.
Referring to FIG. 8, taking a 16M image sensor 21 as an example, the image sensor 21 of the embodiments of the present invention can merge the 16M photosensitive pixels into 4M, that is, output a merged image. After merging, each photosensitive pixel is effectively four times its original size, which increases its sensitivity. In addition, since most of the noise in the image sensor 21 is random noise, one or two of the photosensitive pixels before merging may contain noise; after the four photosensitive pixels are merged into one large photosensitive pixel, the influence of the noise on that large pixel is reduced, i.e. the noise is attenuated and the signal-to-noise ratio is improved.
However, while the photosensitive pixel size becomes larger, the pixel count is reduced, and the resolution of the merged image is therefore reduced.
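For illustration, a minimal sketch of this 2*2 merging is given below. The text describes the adder 2122 summing two digital signals; whether the merged pixel is finally stored as a sum or an average is not spelled out here, so summing is assumed.

```python
import numpy as np

def bin_2x2(raw):
    """Merge each 2*2 group of same-color photosensitive pixels into one
    merged pixel by summing (averaging would only differ by a constant)."""
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0
    return (raw[0::2, 0::2] + raw[0::2, 1::2] +
            raw[1::2, 0::2] + raw[1::2, 1::2])

# Toy example: binning a 4x4 patch into 2x2 quarters the pixel count but
# averages out independent random noise, improving the signal-to-noise ratio.
rng = np.random.default_rng(0)
raw = 100.0 + rng.normal(0.0, 5.0, size=(4, 4))   # signal plus random noise
print(bin_2x2(raw).shape)                          # (2, 2)
```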
During imaging, when the four photosensitive devices 21211 covered by the same filter unit 211a are exposed in sequence, a color-block image can be output after image processing.
Specifically, the photosensitive device 21211 converts light into charge, the generated charge being proportional to the light intensity, and the transfer transistor 21212 controls the conduction or cut-off of the circuit according to a control signal. When the circuit conducts, the source follower 21213 converts the charge signal generated by the photosensitive device 21211 under illumination into a voltage signal, and the analog-to-digital converter 21214 converts the voltage signal into a digital signal to be processed by the image processing module connected to the image sensor 21.
Referring to FIG. 9, taking a 16M image sensor 21 as an example, the image sensor 21 of the embodiments of the present invention can also keep the 16M photosensitive pixel output, that is, output a color-block image. The color-block image includes image pixel units, and each image pixel unit includes original pixels arranged in a 2*2 array, each original pixel being the same size as a photosensitive pixel. However, the filter unit 211a covering the four adjacent photosensitive devices 21211 is of a single color; that is, although the four photosensitive devices 21211 are exposed separately, they are covered by a filter unit 211a of the same color. Therefore, the four adjacent original pixels of each output image pixel unit have the same color, and the resolution of the image still cannot be improved.
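The following sketch, added for illustration with hypothetical helper names, contrasts the color layout of the color-block image (each 2*2 image pixel unit sharing one color, i.e. a non-typical Bayer arrangement) with a typical Bayer arrangement of the same size.

```python
import numpy as np

BAYER_2X2 = np.array([['R', 'G'],
                      ['G', 'B']])

def block_image_colors(height, width):
    """Color of each original pixel in the color-block image: every 2*2
    image pixel unit shares the color of the filter unit covering it."""
    colors = np.empty((height, width), dtype='<U1')
    for i in range(height):
        for j in range(width):
            colors[i, j] = BAYER_2X2[(i // 2) % 2, (j // 2) % 2]
    return colors

def standard_bayer_colors(height, width):
    """Color each pixel would have in a typical Bayer array."""
    colors = np.empty((height, width), dtype='<U1')
    for i in range(height):
        for j in range(width):
            colors[i, j] = BAYER_2X2[i % 2, j % 2]
    return colors

print(block_image_colors(4, 4))      # 2*2 blocks of identical colors
print(standard_bayer_colors(4, 4))   # alternating Bayer mosaic
```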
The control method of the embodiments of the present invention processes the output color-block image to obtain a pseudo-original image.
It can be understood that when the merged image is output, the four adjacent photosensitive pixels of the same color are output as one merged pixel, so the four adjacent merged pixels of the merged image can still be regarded as a typical Bayer array and can be received and processed directly by the image processing module to output a true-color image. When the color-block image is output, however, each photosensitive pixel is output separately; since the four adjacent photosensitive pixels have the same color, the four adjacent original pixels of an image pixel unit have the same color, which is a non-typical Bayer array. The image processing module cannot process a non-typical Bayer array directly. That is, when the image sensor 21 uses a unified image processing mode, in order to be compatible with true-color image output in both modes, namely true-color output in the merged mode and true-color output in the color-block mode, the color-block image needs to be converted into a pseudo-original image; in other words, the image pixel units of the non-typical Bayer array need to be converted into a pixel arrangement of a typical Bayer array.
The pseudo-original image includes pseudo-original pixels arranged in a Bayer array. The pseudo-original pixels include a current pixel, and the original pixels include an associated pixel corresponding to the current pixel.
The control method of the embodiments of the present invention outputs the color-block image and the merged image separately, and identifies the face region on the merged image.
Based on the correspondence between the merged image and the color-block image, for the part of a frame of color-block image outside the face region, that part of the color-block image is first converted into a Bayer image array and then processed with the first interpolation algorithm. Specifically, referring to FIG. 10 and taking FIG. 10 as an example, the current pixels are R3'3' and R5'5', and the corresponding associated pixels are R33 and B55, respectively.
When obtaining the current pixel R3'3', since R3'3' has the same color as its associated pixel R33, the pixel value of R33 is used directly as the pixel value of R3'3' during conversion.
When obtaining the current pixel R5'5', since R5'5' and its associated pixel B55 have different colors, the pixel value of B55 obviously cannot be used directly as the pixel value of R5'5', and the value needs to be calculated by interpolation according to the associated pixel unit of R5'5'.
It should be noted that the pixel values above and below should be understood broadly as the color attribute values of a pixel, for example color values.
The associated pixel unit includes a plurality of, for example four, original pixels in image pixel units that have the same color as the current pixel and are adjacent to the current pixel.
It should be noted that "adjacent" here should be understood broadly. Taking FIG. 10 as an example, the associated pixel of R5'5' is B55; the image pixel units that are adjacent to the image pixel unit of B55 and have the same color as R5'5' are the image pixel units containing R44, R74, R47 and R77, rather than other red image pixel units spatially farther away from the image pixel unit of B55. The red original pixels spatially closest to B55 are R44, R74, R47 and R77; that is, the associated pixel unit of R5'5' consists of R44, R74, R47 and R77, which have the same color as R5'5' and are adjacent to it.
In this way, the original pixels are converted into pseudo-original pixels in different manners according to the different situations of the current pixel, thereby converting the color-block image into the pseudo-original image. Since a special Bayer-pattern filter structure is used when capturing images, the signal-to-noise ratio of the image is improved; and during image processing, the color-block image is interpolated with the first interpolation algorithm, which improves the resolution and definition of the image.
Referring to FIG. 11, in some embodiments, step S137 includes the following steps:
S1371: calculating a gradient in each direction for the associated pixels;
S1372: calculating a weight in each direction for the associated pixels; and
S1373: calculating the pixel value of the current pixel according to the gradients and the weights.
Referring to FIG. 12, in some embodiments, the second calculating unit 137 includes a first calculating subunit 1371, a second calculating subunit 1372 and a third calculating subunit 1373. The first calculating subunit 1371 is configured to calculate a gradient in each direction for the associated pixels; the second calculating subunit 1372 is configured to calculate a weight in each direction for the associated pixels; and the third calculating subunit 1373 is configured to calculate the pixel value of the current pixel according to the gradients and the weights.
That is, step S1371 may be implemented by the first calculating subunit 1371, step S1372 by the second calculating subunit 1372, and step S1373 by the third calculating subunit 1373.
Specifically, the first interpolation algorithm refers to the energy gradient of the image in different directions, and calculates the pixel value of the current pixel by linear interpolation from the associated pixel unit, which has the same color as the current pixel and is adjacent to it, according to the gradient weights in the different directions. In the direction with the smaller energy gradient, the reference proportion is larger, so the weight in the interpolation calculation is larger.
In some examples, for ease of calculation, only the horizontal and vertical directions are considered.
R5'5' is interpolated from R44, R74, R47 and R77. Since there are no original pixels of the same color exactly in the horizontal and vertical directions, the components of that color in the horizontal and vertical directions need to be calculated from the associated pixel unit. The components in the horizontal direction are R45 and R75, and the components in the vertical direction are R54 and R57; they can be calculated from R44, R74, R47 and R77, respectively.
Specifically, R45 = 2/3*R44 + 1/3*R47, R75 = 2/3*R74 + 1/3*R77, R54 = 2/3*R44 + 1/3*R74, and R57 = 2/3*R47 + 1/3*R77.
Then the gradients and the weights in the horizontal and vertical directions are calculated; that is, the gradients of the color in the different directions determine the reference weights of the different directions during interpolation: the direction with the smaller gradient has the larger weight, and the direction with the larger gradient has the smaller weight. The gradient in the horizontal direction is X1 = |R45 - R75|, the gradient in the vertical direction is X2 = |R54 - R57|, W1 = X1/(X1+X2), and W2 = X2/(X1+X2).
From the above, R5'5' = (2/3*R45 + 1/3*R75)*W2 + (2/3*R54 + 1/3*R57)*W1. It can be understood that if X1 is greater than X2, then W1 is greater than W2, so the weight applied to the horizontal direction is W2 and the weight applied to the vertical direction is W1, and vice versa.
In this way, the pixel value of the current pixel can be calculated according to the first interpolation algorithm. By processing the associated pixels in the above manner, the original pixels can be converted into pseudo-original pixels arranged in a typical Bayer array; that is, four adjacent pseudo-original pixels in a 2*2 array consist of one red pseudo-original pixel, two green pseudo-original pixels and one blue pseudo-original pixel.
It should be noted that the first interpolation algorithm is not limited to the manner disclosed in this embodiment, in which only pixel values of the same color in the vertical and horizontal directions are considered in the calculation; pixel values of other colors may also be referred to, for example.
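A small transcription of the horizontal/vertical example above is given below for illustration. It follows the formulas for R45, R75, R54, R57, X1, X2, W1 and W2 exactly, and only adds a guard for the flat case X1 + X2 = 0, which the text does not address.

```python
def interpolate_r55(R44, R47, R74, R77):
    """Gradient-weighted interpolation of R5'5' from its associated pixel
    unit {R44, R47, R74, R77}, considering only the horizontal and vertical
    directions as in the example above."""
    # Same-color components on the horizontal and vertical lines through (5, 5).
    R45 = 2/3 * R44 + 1/3 * R47
    R75 = 2/3 * R74 + 1/3 * R77
    R54 = 2/3 * R44 + 1/3 * R74
    R57 = 2/3 * R47 + 1/3 * R77
    # Gradients: X1 horizontal, X2 vertical.
    X1 = abs(R45 - R75)
    X2 = abs(R54 - R57)
    if X1 + X2 == 0:
        W1 = W2 = 0.5          # flat region: guard added here, not in the text
    else:
        W1 = X1 / (X1 + X2)
        W2 = X2 / (X1 + X2)
    # The direction with the smaller gradient gets the larger weight, so the
    # horizontal estimate is weighted by W2 and the vertical estimate by W1.
    return (2/3 * R45 + 1/3 * R75) * W2 + (2/3 * R54 + 1/3 * R57) * W1

print(interpolate_r55(100, 104, 98, 102))   # a toy set of red pixel values
```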
Referring to FIG. 13, in some embodiments, the following step is included before step S137:
S136a: performing white-balance compensation on the color-block image;
and the following step is included after step S137:
S138a: performing white-balance compensation restoration on the pseudo-original image.
Referring to FIG. 14, in some embodiments, the converting module 13 includes a white-balance compensation unit 136a and a white-balance compensation restoration unit 138a. The white-balance compensation unit 136a is configured to perform white-balance compensation on the color-block image, and the white-balance compensation restoration unit 138a is configured to perform white-balance compensation restoration on the pseudo-original image.
That is, step S136a may be implemented by the white-balance compensation unit 136a, and step S138a may be implemented by the white-balance compensation restoration unit 138a.
Specifically, in some examples, in the process of converting the color-block image into the pseudo-original image, the interpolation of red and blue pseudo-original pixels often refers not only to the colors of the original pixels of the channel of the same color, but also to the color weights of the original pixels of the green channel. Therefore, white-balance compensation needs to be performed before interpolation to exclude the influence of white balance from the interpolation calculation. In order not to destroy the white balance of the color-block image, white-balance compensation restoration needs to be performed on the pseudo-original image after interpolation, restoring according to the red, green and blue gain values used in the compensation.
In this way, the influence of white balance during interpolation can be excluded, and the pseudo-original image obtained after interpolation can keep the white balance of the color-block image.
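As an illustrative sketch only (the gain values and data layout are assumptions, not taken from the text), the compensation and its restoration could look like the following, where `gains` maps 'R', 'G' and 'B' to the gains used by the compensation and `colors` gives the color of each pixel.

```python
import numpy as np

def white_balance_compensate(block_image, colors, gains):
    """Apply the per-channel white-balance gains before interpolation so that
    they do not bias the green-referenced interpolation."""
    out = block_image.astype(np.float64).copy()
    for c, g in gains.items():
        out[colors == c] *= g
    return out

def white_balance_restore(image, colors, gains):
    """Undo the compensation on the pseudo-original image with the same
    R/G/B gain values, preserving the original white balance."""
    out = image.astype(np.float64).copy()
    for c, g in gains.items():
        out[colors == c] /= g
    return out
```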
Referring again to FIG. 13, in some embodiments, the following step is included before step S137:
S136b: performing dead-pixel compensation on the color-block image.
Referring again to FIG. 14, in some embodiments, the converting module 13 includes a dead-pixel compensation unit 136b.
That is, step S136b may be implemented by the dead-pixel compensation unit 136b.
It can be understood that, limited by the manufacturing process, the image sensor 21 may contain dead pixels. A dead pixel usually always shows the same color regardless of the sensitivity, and its presence affects image quality. Therefore, to ensure that the interpolation is accurate and unaffected by dead pixels, dead-pixel compensation needs to be performed before interpolation.
Specifically, during dead-pixel compensation, the original pixels may be detected, and when an original pixel is detected to be a dead pixel, dead-pixel compensation may be performed according to the pixel values of the other original pixels in the image pixel unit to which it belongs.
In this way, the influence of dead pixels on the interpolation processing can be eliminated and the image quality improved.
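A minimal sketch of this compensation is shown below for illustration. How dead pixels are detected is not specified in the text, so the sketch assumes a precomputed boolean `dead_mask` marking dead original pixels.

```python
import numpy as np

def compensate_dead_pixels(block_image, dead_mask):
    """Replace each original pixel flagged as dead with the mean of the other
    (same-color) original pixels in its 2*2 image pixel unit."""
    out = block_image.astype(np.float64).copy()
    for i, j in zip(*np.nonzero(dead_mask)):
        i0, j0 = (i // 2) * 2, (j // 2) * 2            # top-left of the unit
        unit = out[i0:i0 + 2, j0:j0 + 2]
        good = ~dead_mask[i0:i0 + 2, j0:j0 + 2]        # non-dead pixels of the unit
        if good.any():
            out[i, j] = unit[good].mean()
    return out
```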
Referring again to FIG. 13, in some embodiments, the following step is included before step S137:
S136c: performing crosstalk compensation on the color-block image.
Referring again to FIG. 14, in some embodiments, the converting module 13 includes a crosstalk compensation unit 136c.
That is, step S136c may be implemented by the crosstalk compensation unit 136c.
Specifically, the four photosensitive pixels in a photosensitive pixel unit are covered by a filter of the same color, but there may be differences in sensitivity between the photosensitive pixels, so that fixed-pattern noise appears in the pure-color regions of the true-color image converted from the pseudo-original image, which affects image quality. Therefore, crosstalk compensation needs to be performed on the color-block image.
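The text does not describe how the crosstalk compensation is computed. Purely as one possible illustration, the sketch below assumes per-position gains inside each 2*2 unit estimated from a uniformly lit calibration frame and then divided out; a real implementation would likely calibrate per color channel and per image region.

```python
import numpy as np

def crosstalk_gains_from_flat(flat_block_image):
    """Estimate one gain per position inside the 2*2 unit from a uniformly lit
    calibration frame: each sub-position is compared with its unit mean and
    the ratios are averaged over all units (height and width assumed even)."""
    h, w = flat_block_image.shape
    units = (flat_block_image.astype(np.float64)
             .reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3))   # (h/2, w/2, 2, 2)
    ratio = units / units.mean(axis=(2, 3), keepdims=True)
    return ratio.mean(axis=(0, 1))                                   # 2*2 gain map

def compensate_crosstalk(block_image, gain_map):
    """Divide each original pixel by the gain of its position in the unit."""
    h, w = block_image.shape
    units = (block_image.astype(np.float64)
             .reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3))
    units = units / gain_map
    return units.transpose(0, 2, 1, 3).reshape(h, w)
```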
Referring again to FIG. 13, in some embodiments, the following step is included after step S137:
S138b: performing lens shading correction, demosaicing, noise reduction and edge sharpening on the pseudo-original image.
Referring again to FIG. 14, in some embodiments, the converting module 13 further includes a processing unit 138b.
That is, step S138b may be implemented by the processing unit 138b.
It can be understood that after the color-block image is converted into the pseudo-original image, the pseudo-original pixels are arranged in a typical Bayer array and can be processed by the processing unit 138b; the processing includes lens shading correction, demosaicing, noise reduction and edge sharpening, after which a true-color image can be obtained and output to the user.
For the merged image, referring to FIG. 8 and taking FIG. 8 as an example, the image sensor 21 merges the 16M photosensitive pixels into 4M for direct output, so the merged image is only a quarter of the size of the color-block image. Therefore, to facilitate the subsequent synthesis of the color-block image and the merged image, the merged image first needs to be stretched and enlarged by the second interpolation algorithm into a restored image of the same size as the color-block image.
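The second interpolation algorithm is only required to be less complex than the first; nearest-neighbour replication or separable bilinear interpolation are two typical low-cost choices. The sketch below, added for illustration under that assumption, enlarges a merged image by a factor of two in each dimension.

```python
import numpy as np

def enlarge_2x_nearest(merged):
    """Nearest-neighbour 2x enlargement: each merged pixel is replicated into
    a 2*2 block, giving a restored image the size of the color-block image."""
    return np.repeat(np.repeat(merged, 2, axis=0), 2, axis=1)

def enlarge_2x_bilinear(merged):
    """Slightly smoother alternative: separable bilinear 2x enlargement."""
    h, w = merged.shape
    m = merged.astype(np.float64)
    ys = np.clip((np.arange(2 * h) + 0.5) / 2 - 0.5, 0, h - 1)
    xs = np.clip((np.arange(2 * w) + 0.5) / 2 - 0.5, 0, w - 1)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    return ((1 - wy) * (1 - wx) * m[np.ix_(y0, x0)]
            + (1 - wy) * wx * m[np.ix_(y0, x1)]
            + wy * (1 - wx) * m[np.ix_(y1, x0)]
            + wy * wx * m[np.ix_(y1, x1)])
```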
Referring to FIG. 15, after the part of the color-block image outside the face region is processed into the pseudo-original image by the first interpolation algorithm and the merged image is processed into the restored image by the second interpolation algorithm, the two images are synthesized to obtain the merged pseudo-original image. In the merged pseudo-original image, the image outside the face region has a higher resolution.
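For illustration, the synthesis can be as simple as selecting between the two images with the face mask. Whether a hard mask or a feathered blend is used is not specified in the text, so a hard mask is assumed here.

```python
import numpy as np

def compose_merged_pseudo_original(pseudo_original, restored, face_mask):
    """Take the restored image inside the face region and the pseudo-original
    image (first interpolation result) everywhere else."""
    return np.where(face_mask, restored, pseudo_original)
```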
Referring to FIG. 16, in some embodiments, the electronic device 100 of the embodiments of the present invention includes a control apparatus 10, an imaging device 20 and a touch screen 30.
In some embodiments, the electronic device 100 may be a mobile phone or a tablet computer.
Both the mobile phone and the tablet computer are provided with a camera, i.e. the imaging device 20. When a user takes pictures with the mobile phone or the tablet computer, the control method of the embodiments of the present invention can be used to obtain high-resolution pictures.
It should be noted that the electronic device 100 also includes other electronic devices with a shooting function. The control method of the embodiments of the present invention is one of the designated processing modes for image processing of the electronic device 100. That is, when taking pictures with the electronic device 100, the user selects among the designated processing modes contained in the electronic device 100; when the user selects the designated processing mode of the embodiments of the present invention, the user may select the predetermined region on his or her own, and the electronic device 100 performs image processing using the control method of the embodiments of the present invention.
In some embodiments, the imaging device 20 includes a front camera and a rear camera.
It can be understood that many electronic devices 100 include a front camera and a rear camera, and both can implement image processing with the control method of the embodiments of the present invention to improve the user experience.
Referring to FIG. 17, the electronic device 100 of the embodiments of the present invention includes a processor 40, a memory 50, a circuit board 60, a power supply circuit 70 and a housing 80. The circuit board 60 is arranged inside the space enclosed by the housing 80, and the processor 40 and the memory 50 are arranged on the circuit board 60. The power supply circuit 70 is configured to supply power to the circuits or components of the electronic device 100, the memory 50 is configured to store executable program code, and the processor 40 runs the program corresponding to the executable program code by reading the executable program code stored in the memory 50, so as to implement the control method of any of the above embodiments of the present invention.
For example, the processor 40 may be configured to perform the following steps:
S11: controlling the image sensor 21 to output a merged image and a color-block image of a same scene, wherein the merged image includes an array of merged pixels, the plurality of photosensitive pixels 2121 of a same photosensitive pixel unit 212a are merged and output as one merged pixel, the color-block image includes image pixel units arranged in a predetermined array, each image pixel unit includes a plurality of original pixels, and each photosensitive pixel corresponds to one original pixel;
S12: identifying a face region according to the merged image;
S13: converting the color-block image into a pseudo-original image by using a first interpolation algorithm, wherein the pseudo-original image includes pseudo-original pixels arranged in an array, the pseudo-original pixels include a current pixel, and the original pixels include an associated pixel corresponding to the current pixel, the step of converting the color-block image into the pseudo-original image by using the first interpolation algorithm including the following steps:
S131: determining whether the associated pixel is located outside the face region;
S133: when the associated pixel is located outside the face region, determining whether the color of the current pixel is the same as the color of the associated pixel;
S135: when the color of the current pixel is the same as the color of the associated pixel, taking the pixel value of the associated pixel as the pixel value of the current pixel; and
S137: when the color of the current pixel is different from the color of the associated pixel, calculating the pixel value of the current pixel through the first interpolation algorithm according to pixel values of an associated pixel unit, wherein the image pixel units include the associated pixel unit, and the associated pixel unit has the same color as the current pixel and is adjacent to the current pixel; and
S139: converting the merged image into a restored image corresponding to the pseudo-original image through a second interpolation algorithm, the complexity of the second interpolation algorithm being lower than that of the first interpolation algorithm; and
S14: merging the pseudo-original image and the restored image to obtain a merged pseudo-original image.
The computer-readable storage medium of the embodiments of the present invention has instructions stored therein. When the processor 40 of the electronic device 100 executes the instructions, the electronic device 100 performs the control method of any of the above embodiments of the present invention.
For example, the electronic device 100 may perform the following steps:
S11: controlling the image sensor 21 to output a merged image and a color-block image of a same scene, wherein the merged image includes an array of merged pixels, the plurality of photosensitive pixels 2121 of a same photosensitive pixel unit 212a are merged and output as one merged pixel, the color-block image includes image pixel units arranged in a predetermined array, each image pixel unit includes a plurality of original pixels, and each photosensitive pixel corresponds to one original pixel;
S12: identifying a face region according to the merged image;
S13: converting the color-block image into a pseudo-original image by using a first interpolation algorithm, wherein the pseudo-original image includes pseudo-original pixels arranged in an array, the pseudo-original pixels include a current pixel, and the original pixels include an associated pixel corresponding to the current pixel, the step of converting the color-block image into the pseudo-original image by using the first interpolation algorithm including the following steps:
S131: determining whether the associated pixel is located outside the face region;
S133: when the associated pixel is located outside the face region, determining whether the color of the current pixel is the same as the color of the associated pixel;
S135: when the color of the current pixel is the same as the color of the associated pixel, taking the pixel value of the associated pixel as the pixel value of the current pixel; and
S137: when the color of the current pixel is different from the color of the associated pixel, calculating the pixel value of the current pixel through the first interpolation algorithm according to pixel values of an associated pixel unit, wherein the image pixel units include the associated pixel unit, and the associated pixel unit has the same color as the current pixel and is adjacent to the current pixel; and
S139: converting the merged image into a restored image corresponding to the pseudo-original image through a second interpolation algorithm, the complexity of the second interpolation algorithm being lower than that of the first interpolation algorithm; and
S14: merging the pseudo-original image and the restored image to obtain a merged pseudo-original image.
In the description of this specification, descriptions with reference to the terms "one embodiment", "some embodiments", "an illustrative embodiment", "an example", "a specific example" or "some examples" mean that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the illustrative expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate or transmit a program for use by or in connection with an instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or otherwise processing it in a suitable manner if necessary, and can then be stored in a computer memory.
It should be understood that the parts of the present invention may be implemented in hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following technologies known in the art may be used: a discrete logic circuit with logic gates for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those of ordinary skill in the art can understand that all or part of the steps carried by the above method embodiments can be completed by a program instructing related hardware, the program can be stored in a computer-readable storage medium, and the program, when executed, includes one of or a combination of the steps of the method embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated in one processing module, or each unit may exist physically alone, or two or more units may be integrated in one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If implemented in the form of a software functional module and sold or used as an independent product, the integrated module may also be stored in a computer-readable storage medium.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and those of ordinary skill in the art may make changes, modifications, substitutions and variations to the above embodiments within the scope of the present invention.

Claims (21)

  1. A control method for controlling an electronic device, wherein the electronic device comprises an imaging device and a touch screen, the imaging device comprises an image sensor, the image sensor comprises an array of photosensitive pixel units and a filter unit array arranged on the array of photosensitive pixel units, each filter unit of the filter unit array covers a corresponding one of the photosensitive pixel units, and each photosensitive pixel unit comprises a plurality of photosensitive pixels, the control method comprising the following steps:
    controlling the image sensor to output a merged image and a color-block image of a same scene, wherein the merged image comprises an array of merged pixels, the plurality of photosensitive pixels of a same photosensitive pixel unit are merged and output as one merged pixel, the color-block image comprises image pixel units arranged in a predetermined array, each image pixel unit comprises a plurality of original pixels, and each photosensitive pixel corresponds to one original pixel;
    identifying a face region according to the merged image;
    converting the color-block image into a pseudo-original image by using a first interpolation algorithm, wherein the pseudo-original image comprises pseudo-original pixels arranged in an array, the pseudo-original pixels comprise a current pixel, and the original pixels comprise an associated pixel corresponding to the current pixel, the step of converting the color-block image into the pseudo-original image by using the first interpolation algorithm comprising the following steps:
    determining whether the associated pixel is located outside the face region;
    when the associated pixel is located outside the face region, determining whether a color of the current pixel is the same as a color of the associated pixel;
    when the color of the current pixel is the same as the color of the associated pixel, taking a pixel value of the associated pixel as a pixel value of the current pixel; and
    when the color of the current pixel is different from the color of the associated pixel, calculating the pixel value of the current pixel through the first interpolation algorithm according to pixel values of an associated pixel unit, wherein the image pixel units comprise the associated pixel unit, and the associated pixel unit has the same color as the current pixel and is adjacent to the current pixel; and
    converting the merged image into a restored image corresponding to the pseudo-original image through a second interpolation algorithm, a complexity of the second interpolation algorithm being lower than that of the first interpolation algorithm; and
    merging the pseudo-original image and the restored image to obtain a merged pseudo-original image.
  2. The control method according to claim 1, wherein the predetermined array comprises a Bayer array.
  3. The control method according to claim 1, wherein the image pixel unit comprises the original pixels arranged in a 2*2 array.
  4. The control method according to claim 1, wherein the step of calculating the pixel value of the current pixel through the first interpolation algorithm according to the pixel values of the associated pixel unit comprises the following steps:
    calculating a gradient in each direction for the associated pixels;
    calculating a weight in each direction for the associated pixels; and
    calculating the pixel value of the current pixel according to the gradients and the weights.
  5. The control method according to claim 1, wherein the control method comprises, before the step of calculating the pixel value of the current pixel through the first interpolation algorithm according to the pixel values of the associated pixel unit, the following step:
    performing white-balance compensation on the color-block image;
    and the control method comprises, after the step of calculating the pixel value of the current pixel through the first interpolation algorithm according to the pixel values of the associated pixel unit, the following step:
    performing white-balance compensation restoration on the pseudo-original image.
  6. The control method according to claim 1, wherein the control method comprises, before the step of calculating the pixel value of the current pixel through the first interpolation algorithm according to the pixel values of the associated pixel unit, the following step:
    performing dead-pixel compensation on the color-block image.
  7. The control method according to claim 1, wherein the control method comprises, before the step of calculating the pixel value of the current pixel through the first interpolation algorithm according to the pixel values of the associated pixel unit, the following step:
    performing crosstalk compensation on the color-block image.
  8. The control method according to claim 1, wherein the control method comprises, after the step of calculating the pixel value of the current pixel through the first interpolation algorithm according to the pixel values of the associated pixel unit, the following step:
    performing lens shading correction, demosaicing, noise reduction and edge sharpening on the pseudo-original image.
  9. A control apparatus for controlling an electronic device, wherein the electronic device comprises an imaging device and a touch screen, the imaging device comprises an image sensor, the image sensor comprises an array of photosensitive pixel units and a filter unit array arranged on the array of photosensitive pixel units, each filter unit of the filter unit array covers a corresponding one of the photosensitive pixel units, and each photosensitive pixel unit comprises a plurality of photosensitive pixels, the control apparatus comprising:
    an output module configured to control the image sensor to output a merged image and a color-block image of a same scene, wherein the merged image comprises an array of merged pixels, the plurality of photosensitive pixels of a same photosensitive pixel unit are merged and output as one merged pixel, the color-block image comprises image pixel units arranged in a predetermined array, each image pixel unit comprises a plurality of original pixels, and each photosensitive pixel corresponds to one original pixel;
    an identifying module configured to identify a face region according to the merged image;
    a converting module configured to convert the color-block image into a pseudo-original image by using a first interpolation algorithm, wherein the pseudo-original image comprises pseudo-original pixels arranged in an array, the pseudo-original pixels comprise a current pixel, and the original pixels comprise an associated pixel corresponding to the current pixel, the converting module comprising:
    a first determining unit configured to determine whether the associated pixel is located outside the face region;
    a second determining unit configured to determine, when the associated pixel is located outside the face region, whether a color of the current pixel is the same as a color of the associated pixel;
    a first calculating unit configured to take a pixel value of the associated pixel as a pixel value of the current pixel when the color of the current pixel is the same as the color of the associated pixel; and
    a second calculating unit configured to calculate, when the color of the current pixel is different from the color of the associated pixel, the pixel value of the current pixel through the first interpolation algorithm according to pixel values of an associated pixel unit, wherein the image pixel units comprise the associated pixel unit, and the associated pixel unit has the same color as the current pixel and is adjacent to the current pixel; and
    a third calculating unit configured to convert the merged image into a restored image corresponding to the pseudo-original image through a second interpolation algorithm, a complexity of the second interpolation algorithm being lower than that of the first interpolation algorithm; and
    a merging module configured to merge the pseudo-original image and the restored image to obtain a merged pseudo-original image.
  10. The control apparatus according to claim 9, wherein the predetermined array comprises a Bayer array.
  11. The control apparatus according to claim 9, wherein the image pixel unit comprises the original pixels arranged in a 2*2 array.
  12. The control apparatus according to claim 9, wherein the second calculating unit comprises:
    a first calculating subunit configured to calculate a gradient in each direction for the associated pixels;
    a second calculating subunit configured to calculate a weight in each direction for the associated pixels; and
    a third calculating subunit configured to calculate the pixel value of the current pixel according to the gradients and the weights.
  13. The control apparatus according to claim 9, wherein the converting module comprises:
    a white-balance compensation unit configured to perform white-balance compensation on the color-block image; and
    a white-balance compensation restoration unit configured to perform white-balance compensation restoration on the pseudo-original image.
  14. The control apparatus according to claim 9, wherein the converting module comprises:
    a dead-pixel compensation unit configured to perform dead-pixel compensation on the color-block image.
  15. The control apparatus according to claim 9, wherein the converting module comprises:
    a crosstalk compensation unit configured to perform crosstalk compensation on the color-block image.
  16. The control apparatus according to claim 9, wherein the converting module comprises:
    a processing unit configured to perform lens shading correction, demosaicing, noise reduction and edge sharpening on the pseudo-original image.
  17. An electronic device, comprising:
    an imaging device;
    a touch screen; and
    the control apparatus according to any one of claims 9 to 16.
  18. The electronic device according to claim 17, wherein the electronic device comprises a mobile phone or a tablet computer.
  19. The electronic device according to claim 17, wherein the imaging device comprises a front camera or a rear camera.
  20. An electronic device, comprising a housing, a processor, a memory, a circuit board and a power supply circuit, wherein the circuit board is arranged inside a space enclosed by the housing, the processor and the memory are arranged on the circuit board, the power supply circuit is configured to supply power to the circuits or components of the electronic device, the memory is configured to store executable program code, and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to perform the control method according to any one of claims 1 to 8.
  21. A computer-readable storage medium having instructions stored therein which, when executed by a processor of an electronic device, cause the electronic device to perform the control method according to any one of claims 1 to 8.
PCT/CN2017/081916 2016-11-29 2017-04-25 控制方法、控制装置、电子装置和计算机可读存储介质 WO2018098981A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611078876.3 2016-11-29
CN201611078876.3A CN106713790B (zh) 2016-11-29 2016-11-29 控制方法、控制装置及电子装置

Publications (1)

Publication Number Publication Date
WO2018098981A1 true WO2018098981A1 (zh) 2018-06-07

Family

ID=58935283

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/081916 WO2018098981A1 (zh) 2016-11-29 2017-04-25 控制方法、控制装置、电子装置和计算机可读存储介质

Country Status (4)

Country Link
US (1) US10380717B2 (zh)
EP (1) EP3327665B1 (zh)
CN (1) CN106713790B (zh)
WO (1) WO2018098981A1 (zh)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2541179B (en) * 2015-07-31 2019-10-30 Imagination Tech Ltd Denoising filter
US9681109B2 (en) 2015-08-20 2017-06-13 Qualcomm Incorporated Systems and methods for configurable demodulation
WO2020139493A1 (en) * 2018-12-28 2020-07-02 Qualcomm Incorporated Systems and methods for converting non-bayer pattern color filter array image data
CN106507068B (zh) 2016-11-29 2018-05-04 广东欧珀移动通信有限公司 图像处理方法及装置、控制方法及装置、成像及电子装置
CN106357967B (zh) * 2016-11-29 2018-01-19 广东欧珀移动通信有限公司 控制方法、控制装置和电子装置
CN107808361B (zh) * 2017-10-30 2021-08-10 努比亚技术有限公司 图像处理方法、移动终端及计算机可读存储介质
CN112785533B (zh) * 2019-11-07 2023-06-16 RealMe重庆移动通信有限公司 图像融合方法、图像融合装置、电子设备与存储介质
KR20210112042A (ko) * 2020-03-04 2021-09-14 에스케이하이닉스 주식회사 이미지 센싱 장치 및 그의 동작 방법

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6829008B1 (en) * 1998-08-20 2004-12-07 Canon Kabushiki Kaisha Solid-state image sensing apparatus, control method therefor, image sensing apparatus, basic layout of photoelectric conversion cell, and storage medium
CN105578078A (zh) * 2015-12-18 2016-05-11 广东欧珀移动通信有限公司 图像传感器、成像装置、移动终端及成像方法
CN105592303A (zh) * 2015-12-18 2016-05-18 广东欧珀移动通信有限公司 成像方法、成像装置及电子装置
CN106357967A (zh) * 2016-11-29 2017-01-25 广东欧珀移动通信有限公司 控制方法、控制装置和电子装置
CN106506984A (zh) * 2016-11-29 2017-03-15 广东欧珀移动通信有限公司 图像处理方法及装置、控制方法及装置、成像及电子装置

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8452082B2 (en) * 2007-09-27 2013-05-28 Eastman Kodak Company Pattern conversion for interpolation
US7745779B2 (en) * 2008-02-08 2010-06-29 Aptina Imaging Corporation Color pixel arrays having common color filters for multiple adjacent pixels for use in CMOS imagers
CN101815157B (zh) 2009-02-24 2013-01-23 虹软(杭州)科技有限公司 图像及视频的放大方法与相关的图像处理装置
JP5741007B2 (ja) * 2011-01-21 2015-07-01 株式会社リコー 画像処理装置、画素補間方法およびプログラム
DE102011100350A1 (de) * 2011-05-03 2012-11-08 Conti Temic Microelectronic Gmbh Bildsensor mit einstellbarer Auflösung
JP2013066146A (ja) 2011-08-31 2013-04-11 Sony Corp 画像処理装置、および画像処理方法、並びにプログラム
US9479695B2 (en) 2014-07-31 2016-10-25 Apple Inc. Generating a high dynamic range image using a temporal filter
CN105578072A (zh) 2015-12-18 2016-05-11 广东欧珀移动通信有限公司 成像方法、成像装置及电子装置
CN105578076A (zh) * 2015-12-18 2016-05-11 广东欧珀移动通信有限公司 成像方法、成像装置及电子装置


Also Published As

Publication number Publication date
EP3327665A1 (en) 2018-05-30
CN106713790A (zh) 2017-05-24
EP3327665B1 (en) 2020-03-11
CN106713790B (zh) 2019-05-10
US20180150936A1 (en) 2018-05-31
US10380717B2 (en) 2019-08-13

Similar Documents

Publication Publication Date Title
WO2018098981A1 (zh) 控制方法、控制装置、电子装置和计算机可读存储介质
US10531019B2 (en) Image processing method and apparatus, and electronic device
WO2018098978A1 (zh) 控制方法、控制装置、电子装置和计算机可读存储介质
WO2018098982A1 (zh) 图像处理方法、图像处理装置、成像装置及电子装置
WO2018099009A1 (zh) 控制方法、控制装置、电子装置和计算机可读存储介质
WO2018099008A1 (zh) 控制方法、控制装置及电子装置
US10432905B2 (en) Method and apparatus for obtaining high resolution image, and electronic device for same
US10339632B2 (en) Image processing method and apparatus, and electronic device
US10559070B2 (en) Image processing method and apparatus, and electronic device
US10264178B2 (en) Control method and apparatus, and electronic device
WO2018098983A1 (zh) 图像处理方法及装置、控制方法及装置、成像及电子装置
WO2017101451A1 (zh) 成像方法、成像装置及电子装置
US10249021B2 (en) Image processing method and apparatus, and electronic device
WO2018098977A1 (zh) 图像处理方法、图像处理装置、成像装置、制造方法和电子装置
US10290079B2 (en) Image processing method and apparatus, and electronic device
US10165205B2 (en) Image processing method and apparatus, and electronic device
WO2018099006A1 (zh) 控制方法、控制装置及电子装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17876208

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17876208

Country of ref document: EP

Kind code of ref document: A1