WO2018099011A1 - Image processing method, image processing device, imaging device and electronic device - Google Patents

Image processing method, image processing device, imaging device and electronic device

Info

Publication number
WO2018099011A1
WO2018099011A1 (PCT/CN2017/085408, CN2017085408W)
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
image
unit
image processing
color
Prior art date
Application number
PCT/CN2017/085408
Other languages
English (en)
French (fr)
Inventor
唐城
Original Assignee
广东欧珀移动通信有限公司 (Guangdong OPPO Mobile Telecommunications Corp., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广东欧珀移动通信有限公司
Publication of WO2018099011A1

Classifications

    • G06T 3/4007 Interpolation-based scaling, e.g. bilinear interpolation
    • G06T 3/10
    • G06T 3/4015 Demosaicing, e.g. colour filter array [CFA], Bayer pattern
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 5/70
    • G06T 5/73
    • G06T 7/11 Region-based segmentation
    • G06T 7/90 Determination of colour characteristics
    • H04N 23/88 Camera processing pipelines; processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
    • H04N 23/843 Camera processing pipelines; demosaicing, e.g. interpolating colour pixel values
    • H04N 25/134 Arrangement of colour filter arrays [CFA] based on three different wavelength filter elements
    • H04N 25/46 Extracting pixel data from image sensors by combining or binning pixels
    • H04N 25/70 SSIS architectures; circuits associated therewith
    • H04N 25/778 Pixel circuitry comprising amplifiers shared between a plurality of pixels
    • G06T 2207/10024 Color image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; image merging
    • H04N 2209/046 Colour interpolation to calculate the missing colour values

Definitions

  • the present invention relates to image processing technologies, and in particular, to an image processing method, an image processing device, an imaging device, and an electronic device.
  • An existing image sensor includes an array of photosensitive pixel units and an array of filter cells disposed on that array; each filter cell covers a corresponding photosensitive pixel unit, and each photosensitive pixel unit includes a plurality of photosensitive pixels.
  • The sensor can output a merged image by merging the pixel array: the photosensitive pixels of one photosensitive pixel unit are combined and output as a single merged pixel. This improves the signal-to-noise ratio of the merged image, but lowers its resolution.
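The merged ("binned") readout described above can be sketched as follows. This is a minimal illustration, not the patent's circuit: each 2×2 block of same-colour photosensitive pixels is summed into one merged pixel.

```python
# Sketch of 2x2 pixel merging ("binning"): each block of four
# same-colour photosensitive pixels is summed into one merged pixel,
# quartering the resolution while improving the signal-to-noise ratio.
import numpy as np

def merge_2x2(raw: np.ndarray) -> np.ndarray:
    """Sum each 2x2 block of the raw pixel array into one merged pixel."""
    h, w = raw.shape
    return (raw[0:h:2, 0:w:2] + raw[0:h:2, 1:w:2] +
            raw[1:h:2, 0:w:2] + raw[1:h:2, 1:w:2])
```

A 16M-pixel array processed this way yields a 4M-pixel merged image, matching the ratio given later in the description.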
  • the image sensor may also be controlled to output a high-pixel patch image
  • the patch image includes an original pixel array, and each photosensitive pixel corresponds to one original pixel.
  • because the original pixels within one image pixel unit share the same color, the resolution of the patch image is not directly improved. The high-pixel patch image therefore has to be converted by interpolation into a high-pixel pseudo-original image, whose pseudo-original pixels are arranged in a Bayer array.
  • the pseudo-original image can then be converted into a true-color image by image processing and saved. Interpolation improves the sharpness of the true-color image, but it is resource-intensive and time-consuming, which lengthens shooting time.
  • moreover, high-definition processing of certain parts of the true-color image, such as human faces, is unnecessary, and performing it there reduces the user experience.
  • Embodiments of the present invention provide an image processing method, an image processing device, an imaging device, and an electronic device.
  • An image processing method for processing a patch image output by an image sensor to output a pseudo original image, the image sensor comprising a photosensitive pixel unit array and a filter unit array disposed on the photosensitive pixel unit array
  • each filter cell in the filter cell array covers a corresponding one of the photosensitive pixel units
  • each of the photosensitive pixel units includes a plurality of photosensitive pixels
  • the patch image includes image pixel units arranged in a predetermined array
  • the image pixel unit includes a plurality of original pixels, each of the photosensitive pixels corresponding to one of the original pixels
  • the image processing method includes the following steps:
  • the step of converting the patch image into a pseudo original image includes the following steps:
  • the pixel value of the associated pixel is used as the pixel value of the current pixel
  • the pixel value of the current pixel is calculated by a second interpolation algorithm, and the complexity of the second interpolation algorithm is less than that of the first interpolation algorithm.
  • the predetermined array comprises a Bayer array.
  • the image pixel unit comprises original pixels in a 2×2 array.
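The relationship between the patch image (2×2 same-colour blocks tiled in a Bayer arrangement) and the target pseudo-original image (a standard Bayer array) can be made concrete with two small helpers. The names and layout convention here are ours, chosen only for illustration; the patent does not fix a coordinate convention.

```python
# Hypothetical helpers: colour of a pixel position in the pseudo-original
# (standard Bayer) layout versus the patch-image layout, where each 2x2
# block of original pixels shares one colour and the blocks themselves
# tile in a Bayer arrangement.

BAYER = [['R', 'G'], ['G', 'B']]   # one period of the Bayer pattern

def bayer_color(row: int, col: int) -> str:
    """Colour of (row, col) in the target Bayer (pseudo-original) image."""
    return BAYER[row % 2][col % 2]

def patch_color(row: int, col: int) -> str:
    """Colour of (row, col) in the patch image: 2x2 blocks share a colour."""
    return BAYER[(row // 2) % 2][(col // 2) % 2]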
  • the step of calculating a pixel value of the current pixel by using a first interpolation algorithm according to a pixel value of the associated pixel unit includes the following steps:
  • the image processing method includes the following steps before the step of calculating the pixel value of the current pixel by using a first interpolation algorithm according to a pixel value of the associated pixel unit:
  • the image processing method includes the following steps after the step of calculating the pixel value of the current pixel by using a first interpolation algorithm according to a pixel value of the associated pixel unit:
  • the image processing method includes the following steps before the step of calculating the pixel value of the current pixel by using a first interpolation algorithm according to a pixel value of the associated pixel unit:
  • the image processing method includes the following steps before the step of calculating the pixel value of the current pixel by using a first interpolation algorithm according to a pixel value of the associated pixel unit:
  • Crosstalk compensation is performed on the patch image.
  • the image processing method includes the following steps:
  • An image processing apparatus configured to process a patch image output by an image sensor to output a pseudo original image
  • the image sensor comprising a photosensitive pixel unit array and a filter unit array disposed on the photosensitive pixel unit array
  • each filter cell in the filter cell array covers a corresponding one of the photosensitive pixel units
  • each of the photosensitive pixel units includes a plurality of photosensitive pixels
  • the patch image includes image pixel units arranged in a predetermined array
  • the image pixel unit includes a plurality of original pixels
  • the image processing apparatus includes:
  • An identification module configured to identify a face region according to the patch image
  • the conversion module is configured to convert the patch image into a pseudo original image, where the pseudo original image includes an array of original pixels, the original pixels include a current pixel, and the original pixels include an associated pixel corresponding to the current pixel;
  • the conversion module includes:
  • a first determining unit configured to determine whether the associated pixel is located outside the face area
  • a second determining unit configured to determine, when the associated pixel is located outside the face region, whether the color of the current pixel is the same as the color of the associated pixel;
  • a first calculating unit configured to: when a color of the current pixel is the same as a color of the associated pixel, use a pixel value of the associated pixel as a pixel value of the current pixel;
  • a second calculating unit configured to calculate a pixel value of the current pixel by using a first interpolation algorithm according to a pixel value of the associated pixel unit when a color of the current pixel is different from a color of the associated pixel
  • where the image pixel unit includes the associated pixel unit, and the associated pixel unit has the same color as the current pixel and is adjacent to the current pixel;
  • a third calculating unit configured to calculate a pixel value of the current pixel by using a second interpolation algorithm when the associated pixel is located in the face region, where the complexity of the second interpolation algorithm is less than that of the first interpolation algorithm.
  • the predetermined array comprises a Bayer array.
  • the image pixel unit comprises original pixels in a 2×2 array.
  • the second computing unit comprises:
  • a first calculating subunit configured to calculate the gradient in each direction of the associated pixel;
  • a second calculating subunit configured to calculate the weight in each direction of the associated pixel;
  • a third calculating subunit configured to calculate the pixel value of the current pixel according to the gradients and the weights.
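The three subunits above suggest a gradient-weighted interpolation. The patent does not give the exact formula, so the following is only our reading of "gradient amount and weight in each direction": each direction contributes a candidate value, weighted inversely to the gradient in that direction so that flat (low-gradient) directions dominate.

```python
# Sketch of gradient-weighted interpolation (assumed formula, not the
# patent's): candidate values from several directions are combined with
# weights that fall as the gradient in that direction grows.

def weighted_interpolate(neighbors, gradients):
    """neighbors[i]: candidate value from direction i;
    gradients[i]: gradient magnitude in that direction."""
    weights = [1.0 / (1.0 + g) for g in gradients]  # small gradient -> large weight
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, neighbors)) / total
```

With equal gradients this reduces to a plain average; across an edge, the direction along the edge (near-zero gradient) dominates, which is what preserves sharpness.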
  • the conversion module comprises:
  • a white balance compensation unit configured to perform white balance compensation on the color patch image
  • a white balance restoration unit configured to restore the white balance of the pseudo original image.
  • the conversion module comprises:
  • a dead-pixel compensation unit configured to perform dead-pixel compensation on the patch image.
  • the conversion module comprises:
  • a crosstalk compensation unit for performing crosstalk compensation on the patch image.
  • the conversion module comprises:
  • a processing unit configured to perform lens shading correction, demosaicing, noise reduction, and edge sharpening on the pseudo original image.
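The dead-pixel compensation unit mentioned above is not specified further in this text; a common approach, shown here purely as an assumed sketch, is to replace each known-bad original pixel with the average of its valid neighbours.

```python
# Assumed sketch of dead-pixel compensation: replace each pixel at a
# known-bad coordinate with the average of its in-bounds, non-dead
# 4-neighbours.  The patent does not spell out the formula.
import numpy as np

def compensate_dead_pixels(img: np.ndarray, dead: set) -> np.ndarray:
    out = img.astype(float).copy()
    h, w = img.shape
    for (r, c) in dead:
        vals = [img[rr, cc]
                for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                if 0 <= rr < h and 0 <= cc < w and (rr, cc) not in dead]
        if vals:  # leave the pixel untouched if no valid neighbour exists
            out[r, c] = sum(vals) / len(vals)
    return out
```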
  • An imaging apparatus includes the above-described image processing apparatus and an image sensor for generating the patch image.
  • An electronic device includes the above-described imaging device and a touch screen.
  • the electronic device comprises a cell phone or a tablet.
  • the imaging device comprises a front camera or a rear camera.
  • An electronic device includes a housing, a processor, a memory, a circuit board, and a power supply circuit. The circuit board is disposed inside the space enclosed by the housing; the processor and the memory are disposed on the circuit board; the power supply circuit supplies power to the circuits or devices of the electronic device; the memory stores executable program code;
  • and the processor, by reading the executable program code stored in the memory, runs a program corresponding to that code to execute the image processing method described above.
  • the image processing method, the image processing device, the imaging device, and the electronic device recognize the face region from the patch image and process the image outside the face region with the first interpolation algorithm, improving the signal-to-noise ratio, resolution, and sharpness of the image,
  • while the image inside the face region is processed with the second interpolation algorithm, whose complexity is lower than that of the first.
  • This reduces the data to be processed and the processing time, and improves the user experience.
  • FIG. 1 is a schematic flow chart of an image processing method according to an embodiment of the present invention.
  • FIG. 2 is a block diagram showing an image processing apparatus according to an embodiment of the present invention.
  • FIG. 3 is a block diagram of an image sensor according to an embodiment of the present invention.
  • FIG. 4 is a circuit diagram of an image sensor according to an embodiment of the present invention.
  • Figure 5 is a schematic view of a filter unit according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of an image sensor according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of a merged image state according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram showing a state of a patch image according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram showing a state of a control method according to an embodiment of the present invention.
  • FIG. 10 is a schematic flow chart of a control method according to an embodiment of the present invention.
  • FIG. 11 is a block diagram of a second computing module of some embodiments of the present invention.
  • FIG. 12 is a schematic flow chart of an image processing method according to some embodiments of the present invention.
  • FIG. 13 is a block diagram of an image processing apparatus according to some embodiments of the present invention.
  • FIG. 14 is a schematic diagram of an image pixel unit of a patch image of some embodiments of the present invention.
  • FIG. 15 is a schematic flow chart of an image processing method according to some embodiments of the present invention.
  • FIG. 16 is a block diagram of an image processing apparatus according to some embodiments of the present invention.
  • FIG. 17 is a schematic diagram showing the state of an image processing method according to some embodiments of the present invention.
  • FIG. 18 is a schematic block diagram of an image forming apparatus according to an embodiment of the present invention.
  • FIG. 19 is a schematic block diagram of an electronic device according to an embodiment of the present invention.
  • FIG. 20 is a schematic block diagram of an electronic device according to an embodiment of the present invention.
  • Image processing apparatus 100: identification module 120, conversion module 140, first determination unit 141, second determination unit 142, first calculation unit 143, second calculation unit 144, first calculation subunit 1441, second calculation subunit 1442, third calculation subunit 1443, third calculation unit 145, white balance compensation unit 146, white balance restoration unit 147, dead-pixel compensation unit 148, crosstalk compensation unit 149, processing unit 150;
  • Image sensor 200: photosensitive pixel unit array 210, photosensitive pixel unit 210a, photosensitive pixel 212, photosensitive pixel subunit 2120, photosensitive device 2121, transfer tube 2122, source follower 2123, analog-to-digital converter 2124, adder 213, filter unit array 220, filter unit 220a;
  • Housing 300, processor 400, memory 500, circuit board 600, power supply circuit 700;
  • Imaging device 1000
  • the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
  • features defining “first” or “second” may include one or more of the described features either explicitly or implicitly.
  • the meaning of "a plurality" is two or more unless specifically defined otherwise.
  • the terms "installation", "connected", and "connection" should be understood broadly: a connection may be fixed, detachable, or integral; it may be mechanical or electrical, or the components may communicate with each other; and it may be direct, or indirect through an intermediate medium, including an internal communication between two components or an interaction between two components.
  • unless otherwise specified, a first feature being "on" or "below" a second feature may mean that the first and second features are in direct contact, or that they are not in direct contact but contact through an additional feature between them.
  • moreover, a first feature being "above", "over", or "on top of" a second feature includes the first feature being directly above or obliquely above the second feature, or merely means that the first feature is at a higher level than the second feature.
  • a first feature being "below", "under", or "beneath" a second feature includes the first feature being directly below or obliquely below the second feature, or merely means that the first feature is at a lower level than the second feature.
  • an image processing method is for processing a patch image output by the image sensor 200 to output a pseudo original image.
  • the image sensor 200 includes a photosensitive pixel unit array 210 and a filter unit array 220 disposed on the photosensitive pixel unit array 210. Each filter unit 220a in the filter unit array 220 covers a corresponding photosensitive pixel unit 210a.
  • Each of the photosensitive pixel units 210a includes a plurality of photosensitive pixels 212.
  • the patch image includes image pixel units arranged in a predetermined array.
  • the image pixel unit includes a plurality of original pixels. Each photosensitive pixel 212 corresponds to one original pixel.
  • the image processing method includes the following steps:
  • Step S20 Identify a face region according to the patch image
  • Step S40: Convert the patch image into a pseudo original image, where the pseudo original image includes original pixels arranged in an array, the original pixels include a current pixel, and the original pixels include an associated pixel corresponding to the current pixel. Step S40 includes the following steps:
  • Step S41 determining whether the associated pixel is located outside the face area
  • Step S42 When the associated pixel is located outside the face region, determine whether the color of the current pixel is the same as the color of the associated pixel;
  • Step S43 when the color of the current pixel is the same as the color of the associated pixel, the pixel value of the associated pixel is taken as the pixel value of the current pixel;
  • Step S44: When the color of the current pixel differs from that of the associated pixel, calculate the pixel value of the current pixel from the pixel values of the associated pixel unit using a first interpolation algorithm, where the image pixel unit includes the associated pixel unit, and the associated pixel unit has the same color as the current pixel and is adjacent to the current pixel;
  • Step S45: When the associated pixel is located in the face region, the pixel value of the current pixel is calculated by the second interpolation algorithm, whose complexity is less than that of the first interpolation algorithm.
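The branch structure of steps S41-S45 can be sketched directly. The two interpolation routines are placeholders passed in as callables; the method only requires that the second be cheaper than the first.

```python
# The decision flow of steps S41-S45, sketched in Python.  The two
# interpolation callables stand in for the first (gradient-based) and
# second (cheaper) interpolation algorithms.

def convert_pixel(in_face_region, same_color, assoc_value,
                  first_interpolation, second_interpolation):
    if in_face_region:                 # S45: cheap path inside the face region
        return second_interpolation()
    if same_color:                     # S43: colours match -> copy the value
        return assoc_value
    return first_interpolation()       # S44: full interpolation
```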
  • an image processing apparatus 100 is configured to process a patch image output by the image sensor 200 to output a pseudo original image.
  • the image sensor 200 includes a photosensitive pixel unit array 210 and a filter unit array 220 disposed on the photosensitive pixel unit array 210. Each filter unit 220a in the filter unit array 220 covers a corresponding photosensitive pixel unit 210a.
  • Each of the photosensitive pixel units 210a includes a plurality of photosensitive pixels 212.
  • the patch image includes image pixel units arranged in a predetermined array.
  • the image pixel unit includes a plurality of original pixels. Each photosensitive pixel 212 corresponds to one original pixel.
  • the image processing apparatus 100 includes an identification module 120 and a conversion module 140.
  • the conversion module 140 includes a first determination unit 141, a second determination unit 142, a first calculation unit 143, a second calculation unit 144, and a third calculation unit 145.
  • step S20 can be implemented by the identification module 120
  • step S40 can be implemented by the conversion module 140
  • step S41 can be implemented by the first determination unit 141
  • step S42 can be implemented by the second determination unit 142
  • step S43 can be implemented by the first calculation unit 143
  • step S44 can be implemented by the second computing unit 144
  • step S45 can be implemented by the third computing unit 145.
  • the identification module 120 is configured to recognize the face region based on the patch image.
  • the conversion module 140 is configured to convert the patch image into a pseudo original image.
  • the pseudo original image includes the original pixels arranged in an array.
  • the original pixel includes the current pixel.
  • the original pixel includes an associated pixel corresponding to the current pixel.
  • the first determining unit 141 is configured to determine whether the associated pixel is located outside the face region.
  • the second determining unit 142 is configured to determine whether the color of the current pixel and the color of the associated pixel are the same when the associated pixel is located outside the face region.
  • the first calculating unit 143 is configured to use the pixel value of the associated pixel as the pixel value of the current pixel when the color of the current pixel is the same as the color of the associated pixel.
  • the second calculating unit 144 is configured to calculate a pixel value of the current pixel by using a first interpolation algorithm according to the pixel value of the associated pixel unit when the color of the current pixel is different from the color of the associated pixel.
  • the image pixel unit includes an associated pixel unit.
  • the associated pixel unit has the same color as the current pixel and is adjacent to the current pixel.
  • the third calculating unit 145 is configured to calculate the pixel value of the current pixel by using a second interpolation algorithm when the associated pixel is located in the face region. The complexity of the second interpolation algorithm is less than that of the first interpolation algorithm.
  • the image processing method and image processing apparatus 100 recognize the face region from the patch image and process the image outside the face region with the first interpolation algorithm, improving the resolution and sharpness of the image outside the face region, while the image inside the face region is processed with the second interpolation algorithm, whose complexity is lower than that of the first. This reduces the data to be processed and the processing time while improving the image's signal-to-noise ratio, resolution, and sharpness, and improves the user experience.
  • the second interpolation algorithm has less time complexity and spatial complexity than the first interpolation algorithm.
  • the complexity of the algorithm includes time complexity and space complexity.
  • time complexity measures how long an algorithm takes to run.
  • space complexity measures how much storage an algorithm requires. A lower time complexity means the algorithm takes less time, and a lower space complexity means it needs less storage. The second interpolation algorithm therefore helps to increase processing speed, making photographing smoother and improving the user experience.
  • in some embodiments, step S20 may send the patch image to a face recognition algorithm library to detect whether a face is present.
  • the detection may proceed as follows: extract feature data from the patch image, search for matches against the face feature templates stored in a database, and set a threshold; when the similarity between the feature data and a feature template exceeds the threshold, a face is determined to be present. The coordinates of the face region are recorded and passed to the conversion module 140, which then performs the subsequent image processing region by region.
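The threshold rule described above can be sketched generically. Feature extraction and the similarity measure are deliberately left as placeholders, since the text names neither; only the "best match above a threshold" logic is from the description.

```python
# Hedged sketch of the detection rule: compare extracted feature data
# against stored face templates and report the best match only when its
# similarity exceeds the threshold.  `similarity` is a placeholder for
# whatever measure the face recognition library actually uses.

def detect_face(features, templates, threshold, similarity):
    """Return the index of the best-matching template, or None if no
    template's similarity exceeds the threshold."""
    best_idx, best_sim = None, threshold
    for i, template in enumerate(templates):
        s = similarity(features, template)
        if s > best_sim:
            best_idx, best_sim = i, s
    return best_idx
```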
  • the image sensor 200 of the embodiment of the present invention includes a photosensitive pixel unit array 210 and a filter unit array 220 disposed on the photosensitive pixel unit array 210.
  • the photosensitive pixel unit array 210 includes a plurality of photosensitive pixel units 210a, each of which includes a plurality of adjacent photosensitive pixels 212.
  • Each of the photosensitive pixels 212 includes a photosensitive device 2121 and a transfer tube 2122, wherein the photosensitive device 2121 can be a photodiode, and the transfer tube 2122 can be a MOS transistor.
  • the filter unit array 220 includes a plurality of filter units 220a, each of which covers a corresponding one of the photosensitive pixel units 210a.
  • the filter cell array 220 includes a Bayer array; that is, each group of four adjacent filter units 220a consists of one red filter unit, one blue filter unit, and two green filter units.
  • Each photosensitive pixel unit 210a corresponds to a filter unit 220a of a single color: if a photosensitive pixel unit 210a includes n adjacent photosensitive devices 2121, the filter unit 220a covers those n photosensitive devices 2121. The filter unit 220a may be an integral structure, or may be assembled from n independent sub-filters.
  • in some embodiments, each photosensitive pixel unit 210a includes four adjacent photosensitive pixels 212; each two adjacent photosensitive pixels 212 together form one photosensitive pixel subunit 2120, and each photosensitive pixel subunit 2120 further includes a source follower 2123 and an analog-to-digital converter 2124.
  • the photosensitive pixel unit 210a further includes an adder 213. One terminal of each transfer tube 2122 in a photosensitive pixel subunit 2120 is connected to the cathode of the corresponding photosensitive device 2121, the other terminals of the transfer tubes 2122 are connected in common to the gate of the shared source follower 2123, and the source of the source follower 2123 is connected to an analog-to-digital converter 2124.
  • the source follower 2123 may be a MOS transistor.
  • the two photosensitive pixel subunits 2120 are connected to the adder 213 through respective source followers 2123 and analog to digital converters 2124.
  • the adjacent four photosensitive devices 2121 of one photosensitive pixel unit 210a of the image sensor 200 of the embodiment of the present invention share a filter unit 220a of the same color, and each photosensitive device 2121 is connected to a transfer tube 2122.
  • the adjacent two photosensitive devices 2121 share a source follower 2123 and an analog to digital converter 2124, and the adjacent four photosensitive devices 2121 share an adder 213.
  • the adjacent four photosensitive devices 2121 are arranged in a 2×2 array.
  • the two photosensitive devices 2121 in one photosensitive pixel subunit 2120 may be in the same column.
  • when the two photosensitive pixel subunits 2120, that is, the four photosensitive devices 2121, covered by the same filter unit 220a are exposed simultaneously, the pixels may be merged to output a merged image.
  • the photosensitive device 2121 is used to convert light into electric charge, and the generated electric charge is proportional to the light intensity, and the transfer tube 2122 is used to control the conduction or disconnection of the circuit according to the control signal.
  • the source follower 2123 is used to convert the charge signal generated by the photosensitive device 2121 into a voltage signal.
  • Analog to digital converter 2124 is used to convert the voltage signal into a digital signal.
  • the adder 213 is for adding two digital signals together for output.
  • taking a 16M image sensor 200 as an example, the image sensor 200 of the embodiment of the present invention can merge the 16M photosensitive pixels 212 into 4M, that is, output a merged image.
  • after merging, the size of a photosensitive pixel 212 is equivalent to 4 times its original size, thereby increasing the sensitivity of the photosensitive pixel 212.
  • since most of the noise in the image sensor 200 is random noise, one or two of the photosensitive pixels 212 before merging may contain noise; after the four photosensitive pixels 212 are merged into one large photosensitive pixel 212, the influence of that noise on the large pixel is reduced, that is, the noise is attenuated and the signal-to-noise ratio is improved.
  • however, as the pixel count decreases, the resolution of the merged image also decreases.
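The merging described above can be modeled numerically. The sketch below assumes the adder simply sums the four digitized values of each 2*2 same-color block (summing rather than averaging is an assumption; the text only says the two digital signals are added):

```python
import numpy as np

def merge_2x2(raw):
    """Model of pixel merging: sum each 2*2 block of same-color
    photosensitive pixel outputs into one merged pixel, as the
    adder 213 combines the digitized subunit signals."""
    assert raw.shape[0] % 2 == 0 and raw.shape[1] % 2 == 0
    return (raw[0::2, 0::2] + raw[0::2, 1::2] +
            raw[1::2, 0::2] + raw[1::2, 1::2])

# A 4*4 readout (16 pixels) merges into a 2*2 image (4 pixels).
raw = np.arange(16, dtype=np.int64).reshape(4, 4)
merged = merge_2x2(raw)   # merged[0, 0] sums raw[0:2, 0:2]
```

Summing four pixels quadruples the signal while independent random noise grows only by a factor of about 2, which is the signal-to-noise improvement described above.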
  • when the four photosensitive devices 2121 covered by the same filter unit 220a are exposed in sequence, the patch image can be output through image processing.
  • the photosensitive device 2121 is used to convert light into electric charge, and the generated electric charge is proportional to the light intensity, and the transfer tube 2122 is used to control the conduction or disconnection of the circuit according to the control signal.
  • the source follower 2123 is used to convert the charge signal generated by the photosensitive device 2121 into a voltage signal.
  • Analog to digital converter 2124 is used to convert the voltage signal into a digital signal for processing by image processing device 100 coupled to image sensor 200.
  • taking a 16M image sensor 200 as an example, the image sensor 200 of the embodiment of the present invention can also keep the 16M photosensitive pixel 212 output, that is, output a patch image. The patch image includes image pixel units, each including original pixels arranged in a 2*2 array, the size of an original pixel being the same as that of a photosensitive pixel 212. However, since the filter unit 220a covering four adjacent photosensitive devices 2121 is of a single color (that is, although the four photosensitive devices 2121 are exposed separately, the filter units 220a covering them are of the same color), the four adjacent original pixels of each output image pixel unit share the same color, and the resolution of the image still cannot be improved.
  • the image processing method of the embodiment of the present invention can be used to process the output patch image to obtain a pseudo original image.
  • it can be understood that when the merged image is output, four adjacent photosensitive pixels 212 of the same color are output as one merged pixel, so the four adjacent merged pixels of the merged image can still be regarded as a typical Bayer array and can be received and processed directly by the image processing device 100 to output a merged true-color image.
  • when the patch image is output, each photosensitive pixel 212 is output separately; since the four adjacent photosensitive pixels 212 share one color, the four adjacent original pixels of one image pixel unit have the same color, which is an atypical Bayer array.
  • the image processing device 100 cannot process the atypical Bayer array directly. That is, when the image sensor 200 uses the same image processing device 100, to be compatible with true-color image output in both modes, namely merged true-color image output in the merge mode and pseudo-original true-color image output in the patch mode, the patch image needs to be converted into a pseudo-original image, in other words, the image pixel units of the atypical Bayer array need to be converted into a pixel arrangement of a typical Bayer array.
  • the pseudo-original image includes pseudo-original pixels arranged in a Bayer array.
  • the pseudo-original pixels include a current pixel, and the original pixels include an associated pixel corresponding to the current pixel.
  • taking FIG. 9 as an example, the current pixels are R3'3' and R5'5', and the corresponding associated pixels are R33 and B55, respectively. When obtaining the current pixel R3'3', since R3'3' has the same color as its associated pixel R33, the pixel value of R33 is used directly as the pixel value of R3'3' during conversion. When obtaining the current pixel R5'5', since R5'5' and its associated pixel B55 differ in color, the pixel value of B55 obviously cannot be used directly; the value needs to be calculated from the associated pixel unit of R5'5' by the first interpolation algorithm.
  • the pixel values mentioned above and below should be understood broadly as the color attribute value of the pixel, for example a color value.
  • the associated pixel unit includes a plurality of, for example, four, original pixels in the image pixel unit that are the same color as the current pixel and are adjacent to the current pixel.
  • it should be noted that "adjacent" here is to be understood broadly. Taking FIG. 9 as an example, the associated pixel corresponding to R5'5' is B55; the image pixel units that are adjacent to the image pixel unit where B55 is located and contain the associated pixel unit of the same color as R5'5' are the image pixel units where R44, R74, R47, and R77 are located, not other red image pixel units spatially farther from the unit where B55 is located.
  • the red original pixels spatially closest to B55 are R44, R74, R47, and R77; that is, the associated pixel unit of R5'5' consists of R44, R74, R47, and R77, which are the same color as R5'5' and adjacent to it.
  • in this way, for current pixels in different situations, the original pixels are converted into pseudo-original pixels in different ways, thereby converting the patch image into a pseudo-original image. Because a filter with the special Bayer array structure is used when capturing the image, the image signal-to-noise ratio is improved; and during image processing, interpolating the patch image with the first interpolation algorithm improves the resolution and sharpness of the image.
  • step S44 includes the following steps:
  • Step S441: calculating the gradient amount in each direction of the associated pixel unit;
  • Step S442: calculating the weight in each direction of the associated pixel unit; and
  • Step S443: calculating the pixel value of the current pixel according to the gradient amounts and the weights.
  • the second computing unit 144 includes a first computing sub-unit 1441 , a second computing sub-unit 1442 , and a third computing sub-unit 1443 .
  • Step S441 can be implemented by the first calculation sub-unit 1441, step S442 by the second calculation sub-unit 1442, and step S443 by the third calculation sub-unit 1443.
  • the first calculation sub-unit 1441 is used to calculate the gradient amount in each direction of the associated pixel unit, the second calculation sub-unit 1442 is used to calculate the weight in each direction of the associated pixel unit, and the third calculation sub-unit 1443 is used to calculate the pixel value of the current pixel according to the gradient amounts and the weights.
  • specifically, the first interpolation algorithm references the energy gradients of the image in different directions and calculates, by linear interpolation according to the gradient weights in those directions, the pixel value of the current pixel from the associated pixel unit that is the same color as the current pixel and adjacent to it: in a direction with a smaller energy gradient, the reference proportion is larger, and therefore the weight in the interpolation calculation is larger.
  • for example, R5'5' is interpolated from R44, R74, R47, and R77, and there are no original pixels of the same color in its immediate horizontal and vertical directions, so the components of this color in the horizontal and vertical directions are first calculated from the associated pixel unit: the components in the horizontal direction are R45 and R75, and the components in the vertical direction are R54 and R57, which can be calculated from R44, R74, R47, and R77, respectively.
  • R45 = R44*2/3 + R47*1/3,
  • R75 = R74*2/3 + R77*1/3,
  • R54 = R44*2/3 + R74*1/3,
  • R57 = R47*2/3 + R77*1/3.
  • then, the gradient amounts and weights in the horizontal and vertical directions are calculated respectively; that is, the gradient amounts of the color in the different directions determine the reference weights of those directions in the interpolation: the weight is larger in the direction with the smaller gradient amount, and smaller in the direction with the larger gradient amount.
  • with the gradient amount in the horizontal direction denoted X1 and the gradient amount in the vertical direction denoted X2, the weights are W1 = X1/(X1+X2) and W2 = X2/(X1+X2).
  • thus, R5'5' = (2/3*R45 + 1/3*R75)*W2 + (2/3*R54 + 1/3*R57)*W1. It can be understood that if X1 is greater than X2, then W1 is greater than W2, so the weight of the horizontal direction in the calculation is W2 and that of the vertical direction is W1, and vice versa.
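The interpolation of R5'5' above can be written out as a short sketch. The exact expressions for the gradient amounts X1 and X2 are elided in the text, so absolute differences of the directional components are assumed here purely for illustration:

```python
def interpolate_r5p5p(R44, R47, R74, R77):
    """Sketch of the first interpolation for the current pixel R5'5'.
    Directional components are distance-weighted from the associated
    pixel unit, then blended by gradient-derived weights."""
    # Components of the color in the horizontal direction (rows 4 and 7).
    R45 = 2/3 * R44 + 1/3 * R47
    R75 = 2/3 * R74 + 1/3 * R77
    # Components in the vertical direction (columns 4 and 7).
    R54 = 2/3 * R44 + 1/3 * R74
    R57 = 2/3 * R47 + 1/3 * R77
    # Gradient amounts: the text elides their exact form, so absolute
    # differences of the components are ASSUMED here.
    X1 = abs(R45 - R75)   # taken as the horizontal-direction gradient
    X2 = abs(R54 - R57)   # taken as the vertical-direction gradient
    total = X1 + X2
    W1, W2 = (0.5, 0.5) if total == 0 else (X1 / total, X2 / total)
    # Smaller gradient -> larger weight: the horizontal component is
    # weighted by W2 and the vertical by W1, as in the formula above.
    return (2/3 * R45 + 1/3 * R75) * W2 + (2/3 * R54 + 1/3 * R57) * W1
```

With a flat neighborhood the result equals the common value; with rows 4 and 7 differing but the vertical components smooth, the vertical direction dominates, as described.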
  • the pixel value of the current pixel can be calculated according to the first interpolation algorithm.
  • in this way, the original pixels can be converted into pseudo-original pixels arranged in a typical Bayer array; that is, the four adjacent pseudo-original pixels of each 2*2 array include one red pseudo-original pixel, two green pseudo-original pixels, and one blue pseudo-original pixel.
  • it should be noted that the first interpolation algorithm is not limited to considering only pixel values of the same color in the vertical and horizontal directions during calculation; for example, pixel values of other colors may also be referenced.
  • in some embodiments, before step S44 the image processing method includes the step:
  • Step S46: performing white-balance compensation on the patch image;
  • and after step S44 the image processing method includes the step:
  • Step S47: performing white-balance compensation restoration on the pseudo-original image.
  • the conversion module 140 includes a white balance compensation unit 146 and a white balance compensation reduction unit 147.
  • Step S46 can be implemented by the white balance compensation unit 146
  • step S47 can be implemented by the white balance compensation reduction unit 147.
  • the white-balance compensation unit 146 is configured to perform white-balance compensation on the patch image, and the white-balance compensation restoration unit 147 is configured to perform white-balance compensation restoration on the pseudo-original image.
  • specifically, when interpolating the red and blue pseudo-original pixels in the process of converting the patch image into the pseudo-original image, the first interpolation algorithm often references not only the pixel values of original pixels of the channel with the same color but also the color weights of original pixels of the green channel. White-balance compensation is therefore required before the first interpolation algorithm, to exclude the influence of white balance from the interpolation calculation; and in order not to destroy the white balance of the patch image, white-balance compensation restoration is required after the interpolation, restoring according to the red, green, and blue gain values used in the compensation.
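The compensate-then-restore sequence can be sketched as follows; the per-channel gain value is hypothetical, and applying the gain as a simple divide/multiply is an assumption for illustration:

```python
def wb_compensate(pixel_value, channel_gain):
    """Remove the white-balance gain from an original pixel before the
    first interpolation, so interpolation mixes gain-free values."""
    return pixel_value / channel_gain

def wb_restore(pixel_value, channel_gain):
    """Re-apply the same channel gain after interpolation, restoring
    the white balance of the image."""
    return pixel_value * channel_gain

r_gain = 1.5                                 # hypothetical red-channel gain
compensated = wb_compensate(90.0, r_gain)    # gain-free value used in interpolation
restored = wb_restore(compensated, r_gain)   # white balance restored afterwards
```

The round trip is lossless, so the restoration step leaves the white balance of untouched pixels exactly as before compensation.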
  • in some embodiments, before step S44 the image processing method includes the step:
  • Step S48: performing dead-pixel compensation on the patch image.
  • the conversion module 140 includes a dead point compensation unit 148.
  • Step S48 can be implemented by the dead point compensation unit 148.
  • the dead point compensation unit 148 is used to perform dead point compensation on the patch image.
  • it can be understood that the image sensor 200 may have dead pixels.
  • a dead pixel usually does not change as the sensitivity changes and always presents the same color, and its presence affects image quality. Therefore, to ensure that the interpolation is accurate and unaffected by dead pixels, dead-pixel compensation needs to be performed before the first interpolation algorithm.
  • specifically, the original pixels may be detected, and when a dead pixel is found, compensation may be performed according to the pixel values of the other original pixels in the image pixel unit where it is located.
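A minimal sketch of this compensation, assuming the replacement value is the mean of the other three original pixels of the 2*2 unit (the text does not fix the exact formula):

```python
def compensate_dead_pixel(unit_pixels, dead_index):
    """Replace a dead original pixel with the mean of the other
    original pixels in its 2*2 image pixel unit; the mean is an
    ASSUMED strategy, as the text only says the other pixels of
    the unit are used."""
    others = [v for i, v in enumerate(unit_pixels) if i != dead_index]
    repaired = list(unit_pixels)
    repaired[dead_index] = sum(others) / len(others)
    return repaired

# A stuck-high pixel in an otherwise uniform unit is repaired.
unit = [100, 102, 98, 255]            # fourth original pixel is dead
fixed = compensate_dead_pixel(unit, 3)
```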
  • in some embodiments, before step S44 the image processing method includes the step:
  • Step S49: performing crosstalk compensation on the patch image.
  • the conversion module 140 includes a crosstalk compensation unit 149.
  • Step S49 can be implemented by the crosstalk compensation unit 149.
  • the crosstalk compensation unit 149 is configured to perform crosstalk compensation on the patch image.
  • specifically, the four photosensitive pixels 212 in one photosensitive pixel unit 210a are covered by filters of the same color, but the photosensitive pixels 212 may differ in sensitivity, so that fixed-pattern noise appears in solid-color regions of the true-color image converted from the pseudo-original image, affecting image quality. Therefore, crosstalk compensation needs to be performed on the patch image.
  • the predetermined light environment may include, for example, an LED homogenizing plate, a color temperature of about 5000 K, and a brightness of about 1000 lux.
  • the imaging parameters may include a gain value, a shutter value, and a lens position. After the relevant parameters are set, the crosstalk compensation parameters are acquired.
  • a plurality of color patch images are acquired with the set imaging parameters in the set light environment, and merged into one patch image, thereby reducing the noise influence based on the single patch image as a basis for calibration.
  • the goal of crosstalk compensation is to calibrate photosensitive pixels 212 of differing sensitivities to the same level through the compensation.
  • specifically, a patch image is first acquired under the same light environment and imaging parameters, the patch image is crosstalk-compensated according to the calculated compensation parameters, and the compensated Gr'_avg, Gr'1/Gr'_avg, Gr'2/Gr'_avg, Gr'3/Gr'_avg, and Gr'4/Gr'_avg are calculated. Whether the compensation parameters are accurate is judged from the results, and the judgment may consider both macroscopic and microscopic perspectives.
  • microscopic means that a given original pixel still has a large deviation after compensation and is easily perceived by the user after imaging; macroscopic is a global view, meaning that when the total number of original pixels that still deviate after compensation is large, the user perceives the deviation as a whole even if the deviation of each individual original pixel is small. Therefore, a ratio threshold should be set for the microscopic criterion, and a ratio threshold and a count threshold for the macroscopic criterion. In this way, the crosstalk compensation parameters that have been set can be verified, ensuring correct compensation parameters and reducing the impact of crosstalk on image quality.
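The calibration and verification described above can be sketched for one Gr image pixel unit; the gain form (unit average divided by pixel value) and the threshold value are assumptions for illustration:

```python
def crosstalk_gains(gr_unit):
    """Derive per-pixel compensation gains from a flat-field patch
    image: each gain maps its Gr pixel onto the unit average, so
    pixels of differing sensitivity are calibrated to one level."""
    avg = sum(gr_unit) / len(gr_unit)
    return [avg / v for v in gr_unit]

def parameters_accurate(gr_unit, gains, ratio_threshold=0.01):
    """Microscopic check on a verification patch image: every
    compensated pixel must deviate from the compensated average by
    less than the ratio threshold (the macroscopic count threshold
    is omitted from this sketch)."""
    comp = [v * g for v, g in zip(gr_unit, gains)]
    avg = sum(comp) / len(comp)
    return all(abs(c / avg - 1.0) < ratio_threshold for c in comp)

gains = crosstalk_gains([90.0, 100.0, 110.0, 100.0])
```

With these gains, the verification unit lands on a common level and the check passes; without compensation (all gains 1), the 10% sensitivity spread fails the 1% threshold.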
  • in some embodiments, after step S44 the image processing method further includes the step:
  • Step S50: performing lens shading correction, demosaicing, noise reduction, and edge sharpening on the pseudo-original image.
  • the conversion module 140 includes a processing unit 150.
  • Step S50 can be implemented by the processing unit 150; that is, the processing unit 150 is configured to perform lens shading correction, demosaicing, noise reduction, and edge sharpening on the pseudo-original image.
  • it can be understood that after the patch image is converted into the pseudo-original image, the pseudo-original pixels are arranged in a typical Bayer array and can be processed by the processing unit 150, including lens shading correction, demosaicing, noise reduction, and edge sharpening, so that a true-color image can be output to the user after processing.
  • for the image within the face region, the second interpolation algorithm is used for image processing.
  • the interpolation process of the second interpolation algorithm is as follows: take the average of the pixel values of all original pixels in each image pixel unit in the face region; then determine whether the current pixel and its associated pixel have the same color; if they do, take the pixel value of the associated pixel as the pixel value of the current pixel; if they do not, take the pixel value of the nearest original pixel in an image pixel unit of the same color as the current pixel as the pixel value of the current pixel.
  • for example, the pixel values of R11, R12, R21, and R22 are all Ravg, the pixel values of Gr31, Gr32, Gr41, and Gr42 are all Gravg, the pixel values of Gb13, Gb14, Gb23, and Gb24 are all Gbavg, and the pixel values of B33, B34, B43, and B44 are all Bavg.
  • the associated pixel corresponding to the current pixel B22 is R22. Since the color of B22 differs from that of R22, the pixel value of B22 should take the pixel value of the nearest blue original pixel, that is, the value Bavg of any of B33, B34, B43, and B44.
  • other colors are also calculated using a second interpolation algorithm to obtain pixel values for individual pixels.
  • in this way, the computation required to convert the atypical Bayer array into a typical Bayer array is small, and the second interpolation algorithm can also improve the resolution of the pseudo-original image, although its restoration effect is slightly worse than that of the first interpolation algorithm. Therefore, processing the image outside the face region with the first interpolation algorithm and the image inside the face region with the second interpolation algorithm improves the resolution and restoration effect of the image, improving the user experience and reducing the processing time required.
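The averaging-then-copy process of the second interpolation algorithm can be sketched as follows; the helper names and signatures are illustrative assumptions:

```python
def unit_average(unit_pixels):
    """Average of all original pixels in one image pixel unit."""
    return sum(unit_pixels) / len(unit_pixels)

def second_interp(cur_color, assoc_color, assoc_unit_avg, nearest_avg):
    """Sketch of the second interpolation: after unit averaging, a
    current pixel takes its associated unit's average when the colors
    match, otherwise the average of the nearest unit of its own color."""
    return assoc_unit_avg if cur_color == assoc_color else nearest_avg

Ravg = unit_average([200, 204, 196, 200])   # R11, R12, R21, R22
Bavg = unit_average([60, 62, 58, 60])       # B33, B34, B43, B44
# Current pixel B2'2' differs in color from its associated pixel R22,
# so it takes the nearest blue average Bavg instead of Ravg.
b22 = second_interp('B', 'R', Ravg, Bavg)
```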
  • an imaging apparatus 1000 includes the image processing apparatus 100 and the image sensor 200 described above.
  • Image sensor 200 is used to generate a patch image.
  • the imaging device 1000 recognizes the face region from the patch image, processes the image outside the face region with the first interpolation algorithm to improve its resolution and sharpness, and processes the image inside the face region with the second interpolation algorithm, whose complexity is lower than that of the first. This improves the image signal-to-noise ratio, resolution, and sharpness while reducing the data to be processed and the processing time, improving the user experience.
  • an electronic device 10000 includes the above-described imaging device 1000 and a touch screen 2000.
  • electronic device 10000 includes a cell phone or tablet.
  • Both the mobile phone and the tablet computer have a camera, that is, an imaging device 1000.
  • the image processing method of the embodiment of the present invention can be used to obtain a high-resolution picture.
  • the electronic device 10000 may also include other electronic devices having imaging functions.
  • imaging device 1000 includes a front camera or a rear camera.
  • many electronic devices 10000 include a front camera and a rear camera. Both the front camera and the rear camera can implement image processing by using the image processing method of the embodiment of the present invention to enhance the user experience.
  • an electronic device 10000 includes a housing 300, a processor 400, a memory 500, a circuit board 600, and a power supply circuit 700.
  • the circuit board 600 is disposed inside a space enclosed by the housing 300.
  • the processor 400 and the memory 500 are disposed on the circuit board 600.
  • the power circuit 700 is used to power various circuits or devices of the electronic device 10000.
  • the memory 500 is for storing executable program code.
  • the processor 400 runs a program corresponding to the executable program code by reading the executable program code stored in the memory 500 for executing the image processing method of any of the above embodiments.
  • processor 400 can be used to perform the following steps:
  • identifying a face region according to the patch image, and converting the patch image into a pseudo-original image, where the pseudo-original image includes pseudo-original pixels arranged in an array, the pseudo-original pixels include a current pixel, and the original pixels include an associated pixel corresponding to the current pixel; the converting step includes the following steps:
  • determining whether the associated pixel is located outside the face region;
  • when the associated pixel is located outside the face region, determining whether the color of the current pixel is the same as the color of the associated pixel;
  • when the colors are the same, taking the pixel value of the associated pixel as the pixel value of the current pixel;
  • when the colors differ, calculating the pixel value of the current pixel from the pixel values of an associated pixel unit by the first interpolation algorithm, where the image pixel unit includes the associated pixel unit, which is the same color as the current pixel and adjacent to it; and
  • when the associated pixel is located within the face region, calculating the pixel value of the current pixel by the second interpolation algorithm, the complexity of which is lower than that of the first interpolation algorithm.
  • a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • computer readable media include the following: electrical connections (control methods) having one or more wires, portable computer disk cartridges (magnetic devices), random access memory (RAM), Read only memory (ROM), erasable editable read only memory (EPROM or flash memory), fiber optic devices, and portable compact disk read only memory (CDROM).
  • the computer readable medium may even be a paper or other suitable medium on which the program can be printed, as it may be optically scanned, for example by paper or other medium, followed by editing, interpretation or, if appropriate, other suitable The method is processed to obtain the program electronically and then stored in computer memory.
  • portions of the embodiments of the invention may be implemented in hardware, software, firmware or a combination thereof.
  • for example, multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system.
  • if implemented in hardware, as in another embodiment, they can be implemented by any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and so on.
  • each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
  • the above mentioned storage medium may be a read only memory, a magnetic disk or an optical disk or the like.

Abstract

The present invention discloses an image processing method for processing a color-block image output by an image sensor to output a pseudo-original image. The image processing method includes the following steps: (S20) identifying a face region according to the color-block image; and (S40) converting the color-block image into a pseudo-original image. The image processing method of embodiments of the present invention identifies a face region according to the color-block image, processes the image outside the face region with a first interpolation algorithm to improve its resolution and sharpness, and processes the image inside the face region with a second interpolation algorithm whose complexity is lower than that of the first, thereby improving the image signal-to-noise ratio, resolution, and sharpness while reducing the data to be processed and the processing time, and improving the user experience. The present invention further discloses an image processing apparatus (100), an imaging apparatus (1000), and an electronic apparatus (10000).

Description

Image processing method, image processing apparatus, imaging apparatus, and electronic apparatus
Priority Information
This application claims priority to and the benefit of Chinese Patent Application No. 201611079541.3, filed with the State Intellectual Property Office of China on November 29, 2016, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to image processing technology, and in particular to an image processing method, an image processing apparatus, an imaging apparatus, and an electronic apparatus.
Background
An existing image sensor includes an array of photosensitive pixel units and an array of filter units arranged over it; each filter unit covers a corresponding photosensitive pixel unit, and each photosensitive pixel unit includes a plurality of photosensitive pixels. In operation, the image sensor can be controlled to expose and output a merged image, which can be converted into a merged true-color image by an image processing method and stored. The merged image includes an array of merged pixels, the plural photosensitive pixels of one photosensitive pixel unit being merged and output as one merged pixel. This improves the signal-to-noise ratio of the merged image, but its resolution is reduced. The image sensor can also be controlled to expose and output a high-pixel-count color-block image, which includes an array of original pixels, each photosensitive pixel corresponding to one original pixel. However, because the plural original pixels corresponding to one filter unit share the same color, the resolution of the color-block image still cannot be improved. The high-pixel-count color-block image therefore needs to be converted, by interpolation, into a high-pixel-count pseudo-original image, which may include pseudo-original pixels arranged in a Bayer array. The pseudo-original image can be converted into a pseudo-original true-color image by an image processing method and stored. Interpolation improves the sharpness of the true-color image but is resource-intensive and time-consuming, lengthening shooting time; moreover, in practice, applying high-sharpness processing to certain parts of the true-color image, such as faces, can actually degrade the user experience.
Summary
Embodiments of the present invention provide an image processing method, an image processing apparatus, an imaging apparatus, and an electronic apparatus.
The image processing method of embodiments of the present invention is used to process a color-block image output by an image sensor to output a pseudo-original image. The image sensor includes an array of photosensitive pixel units and an array of filter units arranged over the array of photosensitive pixel units; each filter unit covers a corresponding photosensitive pixel unit, and each photosensitive pixel unit includes a plurality of photosensitive pixels. The color-block image includes image pixel units arranged in a predetermined array, each image pixel unit includes a plurality of original pixels, and each photosensitive pixel corresponds to one original pixel. The image processing method includes the following steps:
identifying a face region according to the color-block image; and
converting the color-block image into a pseudo-original image, the pseudo-original image including pseudo-original pixels arranged in an array, the pseudo-original pixels including a current pixel, and the original pixels including an associated pixel corresponding to the current pixel, wherein the step of converting the color-block image into the pseudo-original image includes the following steps:
determining whether the associated pixel is located outside the face region;
when the associated pixel is located outside the face region, determining whether the color of the current pixel is the same as the color of the associated pixel;
when the color of the current pixel is the same as the color of the associated pixel, taking the pixel value of the associated pixel as the pixel value of the current pixel; and
when the color of the current pixel differs from the color of the associated pixel, calculating the pixel value of the current pixel from the pixel values of an associated pixel unit by a first interpolation algorithm, the image pixel unit including the associated pixel unit, the associated pixel unit being the same color as the current pixel and adjacent to the current pixel; and
when the associated pixel is located within the face region, calculating the pixel value of the current pixel by a second interpolation algorithm, the complexity of the second interpolation algorithm being lower than that of the first interpolation algorithm.
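The branching of the steps above can be sketched as a simple dispatch; the function and parameter names are illustrative, with the two interpolation algorithms passed in as callables:

```python
def convert_current_pixel(in_face_region, colors_match, assoc_value,
                          first_interp, second_interp):
    """Choose how a current pixel's value is obtained, following the
    three branches of the conversion step."""
    if in_face_region:
        return second_interp()      # lower-complexity path for faces
    if colors_match:
        return assoc_value          # copy the associated pixel directly
    return first_interp()           # gradient-weighted interpolation

# Illustrative calls with stub interpolators:
v_outside = convert_current_pixel(False, True, 42, lambda: -1, lambda: -2)
v_in_face = convert_current_pixel(True, False, 42, lambda: -1, lambda: -2)
```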
In some embodiments, the predetermined array includes a Bayer array.
In some embodiments, the image pixel unit includes original pixels in a 2*2 array.
In some embodiments, the step of calculating the pixel value of the current pixel from the pixel values of the associated pixel unit by the first interpolation algorithm includes the following steps:
calculating the gradient amount in each direction of the associated pixels;
calculating the weight in each direction of the associated pixels; and
calculating the pixel value of the current pixel according to the gradient amounts and the weights.
In some embodiments, before the step of calculating the pixel value of the current pixel from the pixel values of the associated pixel unit by the first interpolation algorithm, the image processing method includes the following step:
performing white-balance compensation on the color-block image;
and after that step, the image processing method includes the following step:
performing white-balance compensation restoration on the pseudo-original image.
In some embodiments, before the step of calculating the pixel value of the current pixel from the pixel values of the associated pixel unit by the first interpolation algorithm, the image processing method includes the following step:
performing dead-pixel compensation on the color-block image.
In some embodiments, before the step of calculating the pixel value of the current pixel from the pixel values of the associated pixel unit by the first interpolation algorithm, the image processing method includes the following step:
performing crosstalk compensation on the color-block image.
In some embodiments, after the step of calculating the pixel value of the current pixel from the pixel values of the associated pixel unit by the first interpolation algorithm, the image processing method includes the following step:
performing lens shading correction, demosaicing, noise reduction, and edge sharpening on the pseudo-original image.
The image processing apparatus of embodiments of the present invention is used to process a color-block image output by an image sensor to output a pseudo-original image. The image sensor includes an array of photosensitive pixel units and an array of filter units arranged over the array of photosensitive pixel units; each filter unit covers a corresponding photosensitive pixel unit, and each photosensitive pixel unit includes a plurality of photosensitive pixels. The color-block image includes image pixel units arranged in a predetermined array, each image pixel unit includes a plurality of original pixels, and each photosensitive pixel corresponds to one original pixel. The image processing apparatus includes:
an identification module configured to identify a face region according to the color-block image; and
a conversion module configured to convert the color-block image into a pseudo-original image, the pseudo-original image including pseudo-original pixels arranged in an array, the pseudo-original pixels including a current pixel, and the original pixels including an associated pixel corresponding to the current pixel;
the conversion module includes:
a first determination unit configured to determine whether the associated pixel is located outside the face region;
a second determination unit configured to determine, when the associated pixel is located outside the face region, whether the color of the current pixel is the same as the color of the associated pixel;
a first calculation unit configured to take the pixel value of the associated pixel as the pixel value of the current pixel when the color of the current pixel is the same as the color of the associated pixel;
a second calculation unit configured to calculate, when the color of the current pixel differs from the color of the associated pixel, the pixel value of the current pixel from the pixel values of an associated pixel unit by a first interpolation algorithm, the image pixel unit including the associated pixel unit, the associated pixel unit being the same color as the current pixel and adjacent to the current pixel; and
a third calculation unit configured to calculate the pixel value of the current pixel by a second interpolation algorithm when the associated pixel is located within the face region, the complexity of the second interpolation algorithm being lower than that of the first interpolation algorithm.
In some embodiments, the predetermined array includes a Bayer array.
In some embodiments, the image pixel unit includes original pixels in a 2*2 array.
In some embodiments, the second calculation unit includes:
a first calculation subunit configured to calculate the gradient amount in each direction of the associated pixels;
a second calculation subunit configured to calculate the weight in each direction of the associated pixels; and
a third calculation subunit configured to calculate the pixel value of the current pixel according to the gradient amounts and the weights.
In some embodiments, the conversion module includes:
a white-balance compensation unit configured to perform white-balance compensation on the color-block image; and
a white-balance compensation restoration unit configured to perform white-balance compensation restoration on the pseudo-original image.
In some embodiments, the conversion module includes:
a dead-pixel compensation unit configured to perform dead-pixel compensation on the color-block image.
In some embodiments, the conversion module includes:
a crosstalk compensation unit configured to perform crosstalk compensation on the color-block image.
In some embodiments, the conversion module includes:
a processing unit configured to perform lens shading correction, demosaicing, noise reduction, and edge sharpening on the pseudo-original image.
The imaging apparatus of embodiments of the present invention includes the image processing apparatus described above and an image sensor, the image sensor being configured to generate the color-block image.
The electronic apparatus of embodiments of the present invention includes the imaging apparatus described above and a touch screen.
In some embodiments, the electronic apparatus includes a mobile phone or a tablet computer.
In some embodiments, the imaging apparatus includes a front camera or a rear camera.
The electronic apparatus of embodiments of the present invention includes a housing, a processor, a memory, a circuit board, and a power supply circuit, wherein the circuit board is disposed inside a space enclosed by the housing, and the processor and the memory are disposed on the circuit board; the power supply circuit is configured to supply power to the circuits or devices of the electronic apparatus; the memory is configured to store executable program code; and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to execute the image processing method described above.
The image processing method, image processing apparatus, imaging apparatus, and electronic apparatus of embodiments of the present invention identify a face region according to the color-block image, process the image outside the face region with a first interpolation algorithm to improve its resolution and sharpness, and process the image inside the face region with a second interpolation algorithm whose complexity is lower than that of the first, improving the image signal-to-noise ratio, resolution, and sharpness while reducing the data to be processed and the processing time, and improving the user experience.
Additional aspects and advantages of the present invention will be set forth in part in the description that follows, and in part will become apparent from the description or may be learned by practice of the present invention.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
FIG. 2 is a block diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 3 is a block diagram of an image sensor according to an embodiment of the present invention;
FIG. 4 is a circuit diagram of an image sensor according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a filter unit according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an image sensor according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a merged-image state according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a color-block-image state according to an embodiment of the present invention;
FIG. 9 is a schematic state diagram of a control method according to an embodiment of the present invention;
FIG. 10 is a schematic flowchart of a control method according to an embodiment of the present invention;
FIG. 11 is a block diagram of a second calculation module according to some embodiments of the present invention;
FIG. 12 is a schematic flowchart of an image processing method according to some embodiments of the present invention;
FIG. 13 is a block diagram of an image processing apparatus according to some embodiments of the present invention;
FIG. 14 is a schematic diagram of the image pixel units of a color-block image according to some embodiments of the present invention;
FIG. 15 is a schematic flowchart of an image processing method according to some embodiments of the present invention;
FIG. 16 is a block diagram of an image processing apparatus according to some embodiments of the present invention;
FIG. 17 is a schematic state diagram of an image processing method according to some embodiments of the present invention;
FIG. 18 is a block diagram of an imaging apparatus according to an embodiment of the present invention;
FIG. 19 is a block diagram of an electronic apparatus according to an embodiment of the present invention;
FIG. 20 is a block diagram of an electronic apparatus according to an embodiment of the present invention.
Description of main elements and reference numerals:
image processing apparatus 100, identification module 120, conversion module 140, first determination unit 141, second determination unit 142, first calculation unit 143, second calculation unit 144, first calculation subunit 1441, second calculation subunit 1442, third calculation subunit 1443, third calculation unit 145, white-balance compensation unit 146, white-balance compensation restoration unit 147, dead-pixel compensation unit 148, crosstalk compensation unit 149, processing unit 150;
image sensor 200, photosensitive pixel unit array 210, photosensitive pixel unit 210a, photosensitive pixel 212, photosensitive pixel subunit 2120, photosensitive device 2121, transfer tube 2122, source follower 2123, analog-to-digital converter 2124, adder 213, filter unit array 220, filter unit 220a;
housing 300, processor 400, memory 500, circuit board 600, power supply circuit 700;
imaging apparatus 1000;
touch screen 2000;
electronic apparatus 10000.
Detailed Description
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numerals denote identical or similar elements or elements having identical or similar functions throughout. The embodiments described below with reference to the drawings are exemplary, intended only to explain the present invention, and are not to be construed as limiting it.
In the description of embodiments of the present invention, it is to be understood that terms indicating orientation or positional relationships, such as "center", "longitudinal", "transverse", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", and "counterclockwise", are based on the orientations or positional relationships shown in the drawings, are used only to facilitate and simplify the description of embodiments of the present invention, and do not indicate or imply that the apparatus or element referred to must have a particular orientation or be constructed and operated in a particular orientation; they are therefore not to be construed as limiting the embodiments of the present invention. In addition, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of the technical features indicated; a feature defined with "first" or "second" may thus explicitly or implicitly include one or more of that feature. In the description of embodiments of the present invention, "a plurality of" means two or more, unless otherwise specifically defined.
In the description of embodiments of the present invention, it should be noted that, unless otherwise expressly specified and defined, the terms "mounted", "connected", and "coupled" are to be understood broadly: for example, a connection may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical connection, an electrical connection, or mutual communication; it may be a direct connection or an indirect connection through an intermediate medium; and it may be an internal communication between two elements or an interaction between two elements. Those of ordinary skill in the art can understand the specific meanings of these terms in the embodiments of the present invention according to the specific circumstances.
In embodiments of the present invention, unless otherwise expressly specified and defined, a first feature being "on" or "under" a second feature may include the first and second features being in direct contact, or being in contact through a further feature between them rather than in direct contact. Moreover, a first feature being "on", "above", or "over" a second feature includes the first feature being directly above or obliquely above the second feature, or merely indicates that the first feature is at a higher level than the second feature; and a first feature being "under", "below", or "beneath" a second feature includes the first feature being directly below or obliquely below the second feature, or merely indicates that the first feature is at a lower level than the second feature.
The following disclosure provides many different embodiments or examples for implementing different structures of embodiments of the present invention. To simplify the disclosure of embodiments of the present invention, components and arrangements of specific examples are described below; they are, of course, merely examples and are not intended to limit the present invention. In addition, embodiments of the present invention may repeat reference numerals and/or reference letters in different examples; such repetition is for simplicity and clarity and does not in itself indicate a relationship between the various embodiments and/or arrangements discussed. Furthermore, embodiments of the present invention provide examples of various specific processes and materials, but those of ordinary skill in the art will appreciate the applicability of other processes and/or the use of other materials.
Referring to FIG. 1, the image processing method of embodiments of the present invention is used to process a color-block image output by an image sensor 200 to output a pseudo-original image. The image sensor 200 includes a photosensitive pixel unit array 210 and a filter unit array 220 arranged over the photosensitive pixel unit array 210. Each filter unit 220a covers a corresponding photosensitive pixel unit 210a, and each photosensitive pixel unit 210a includes a plurality of photosensitive pixels 212. The color-block image includes image pixel units arranged in a predetermined array; each image pixel unit includes a plurality of original pixels, and each photosensitive pixel 212 corresponds to one original pixel. The image processing method includes the following steps:
Step S20: identifying a face region according to the color-block image; and
Step S40: converting the color-block image into a pseudo-original image, the pseudo-original image including pseudo-original pixels arranged in an array, the pseudo-original pixels including a current pixel, and the original pixels including an associated pixel corresponding to the current pixel, wherein step S40 includes the following steps:
Step S41: determining whether the associated pixel is located outside the face region;
Step S42: when the associated pixel is located outside the face region, determining whether the color of the current pixel is the same as the color of the associated pixel;
Step S43: when the color of the current pixel is the same as the color of the associated pixel, taking the pixel value of the associated pixel as the pixel value of the current pixel; and
Step S44: when the color of the current pixel differs from the color of the associated pixel, calculating the pixel value of the current pixel from the pixel values of an associated pixel unit by a first interpolation algorithm, the image pixel unit including the associated pixel unit, the associated pixel unit being the same color as the current pixel and adjacent to it; and
Step S45: when the associated pixel is located within the face region, calculating the pixel value of the current pixel by a second interpolation algorithm, the complexity of the second interpolation algorithm being lower than that of the first interpolation algorithm.
Referring to FIG. 2, the image processing apparatus 100 of embodiments of the present invention is used to process a color-block image output by an image sensor 200 to output a pseudo-original image. The image sensor 200 includes a photosensitive pixel unit array 210 and a filter unit array 220 arranged over the photosensitive pixel unit array 210; each filter unit 220a covers a corresponding photosensitive pixel unit 210a, and each photosensitive pixel unit 210a includes a plurality of photosensitive pixels 212. The color-block image includes image pixel units arranged in a predetermined array; each image pixel unit includes a plurality of original pixels, and each photosensitive pixel 212 corresponds to one original pixel. The image processing apparatus 100 includes an identification module 120 and a conversion module 140. The conversion module 140 includes a first determination unit 141, a second determination unit 142, a first calculation unit 143, a second calculation unit 144, and a third calculation unit 145.
The image processing method of embodiments of the present invention can be implemented by the image processing apparatus 100 of embodiments of the present invention. For example, step S20 can be implemented by the identification module 120, step S40 by the conversion module 140, step S41 by the first determination unit 141, step S42 by the second determination unit 142, step S43 by the first calculation unit 143, step S44 by the second calculation unit 144, and step S45 by the third calculation unit 145.
That is to say, the identification module 120 is configured to identify a face region according to the color-block image, and the conversion module 140 is configured to convert the color-block image into a pseudo-original image. The pseudo-original image includes pseudo-original pixels arranged in an array; the pseudo-original pixels include a current pixel, and the original pixels include an associated pixel corresponding to the current pixel. The first determination unit 141 is configured to determine whether the associated pixel is located outside the face region. The second determination unit 142 is configured to determine, when the associated pixel is located outside the face region, whether the color of the current pixel is the same as the color of the associated pixel. The first calculation unit 143 is configured to take the pixel value of the associated pixel as the pixel value of the current pixel when the colors are the same. The second calculation unit 144 is configured to calculate, when the colors differ, the pixel value of the current pixel from the pixel values of the associated pixel unit by the first interpolation algorithm; the image pixel unit includes the associated pixel unit, which is the same color as the current pixel and adjacent to it. The third calculation unit 145 is configured to calculate the pixel value of the current pixel by the second interpolation algorithm when the associated pixel is located within the face region, the complexity of the second interpolation algorithm being lower than that of the first.
The image processing method and image processing apparatus 100 of embodiments of the present invention identify a face region according to the color-block image, process the image outside the face region with the first interpolation algorithm to improve its resolution and sharpness, and process the image inside the face region with the second interpolation algorithm of lower complexity, improving the image signal-to-noise ratio, resolution, and sharpness while reducing the data to be processed and the processing time, and improving the user experience.
In some embodiments, both the time complexity and the space complexity of the second interpolation algorithm are lower than those of the first interpolation algorithm. The complexity of an algorithm includes time complexity and space complexity: time complexity measures the time an algorithm needs, and space complexity measures the storage space it needs. A low time complexity means the algorithm takes little time, and a low space complexity means it takes little storage; using the second interpolation algorithm therefore helps increase computation speed, makes the shooting process smoother, and improves the user experience.
In one example, step S20 may consist of feeding the color-block image to a specific face recognition algorithm library to detect whether a face is present. The detection method may be to extract feature data from the color-block image, search and match it against the face feature templates stored in a database, and set a threshold: when the similarity between the feature data and a feature template exceeds the threshold, a face is judged to be present, and the coordinate range of the face is recorded and passed to the conversion module 140, which then performs the subsequent image processing region by region.
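The matching step of this example can be sketched as follows; the use of cosine similarity and the threshold value of 0.8 are assumptions for illustration, since the text does not fix the similarity measure:

```python
def is_face(feature_data, template, threshold=0.8):
    """Sketch of the matching step: cosine similarity between the
    extracted feature data and a stored face feature template,
    compared against a set threshold (feature extraction itself
    is elided in the text)."""
    dot = sum(a * b for a, b in zip(feature_data, template))
    norm_f = sum(a * a for a in feature_data) ** 0.5
    norm_t = sum(b * b for b in template) ** 0.5
    return dot / (norm_f * norm_t) >= threshold

# A feature vector close to the template passes the threshold.
matched = is_face([0.9, 0.1, 0.4], [1.0, 0.0, 0.5])
```

When a match is found, the recorded face coordinates would then route pixels to the second, lower-complexity interpolation path.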
请一并参阅图3至图6,本发明实施方式的图像传感器200包括感光像素单元阵列210和设置在感光像素单元阵列210上的滤光片单元阵列220。
进一步地,感光像素单元阵列210包括多个感光像素单元210a,每个感光像素单元210a包括多个相邻的感光像素212。每个感光像素212包括一个感光器件2121和一个传输管2122,其中,感光器件2121可以是光电二极管,传输管2122可以是MOS晶体管。
滤光片单元阵列220包括多个滤光片单元220a,每个滤光片单元220a覆盖对应一个感光像素单元210a。
具体地,在某些示例中,滤光片单元阵列220包括拜耳阵列,也即是说,相邻的四个滤光片单元220a分别为一个红色滤光片单元、一个蓝色滤光片单元和两个绿色滤光片单元。
每一个感光像素单元210a对应同一颜色的滤光片单元220a,若一个感光像素单元210a中一共包括n个相邻的感光器件2121,那么一个滤光片单元220a覆盖一个感光像素单元210a中的n个感光器件2121,该滤光片单元220a可以是一体构造,也可以由n个独立的子滤光片组装连接在一起。
在某些实施方式中,每个感光像素单元210a包括四个相邻的感光像素212,相邻两个感光像素212共同构成一个感光像素子单元2120,感光像素子单元2120还包括一个源极跟随器2123及一个模数转换器2124。感光像素单元210a还包括一个加法器213。其中,一个感光像素子单元2120中的每个传输管2122的一端电极被连接到对应感光器件2121的阴极电极,每个传输管2122的另一端被共同连接至源极跟随器2123的闸极电极,并通过源极跟随器2123源极电极连接至一个模数转换器2124。其中,源极跟随器2123可以是MOS晶体管。两个感光像素子单元2120通过各自的源极跟随器2123及模数转换器2124连接至加法器213。
也即是说,本发明实施方式的图像传感器200的一个感光像素单元210a中相邻的四个感光器件2121共用一个同颜色的滤光片单元220a,每个感光器件2121对应连接一个传输管2122,相邻两个感光器件2121共用一个源极跟随器2123和一个模数转换器2124,相邻的四个感光器件2121共用一个加法器213。
进一步地,相邻的四个感光器件2121呈2*2阵列排布。其中,一个感光像素子单元2120中的两个感光器件2121可以处于同一列。
在成像时,当同一滤光片单元220a下覆盖的两个感光像素子单元2120或者说四个感光器件2121同时曝光时,可以对像素进行合并进而可输出合并图像。
具体地，感光器件2121用于将光照转化为电荷，且产生的电荷与光照强度成比例关系，传输管2122用于根据控制信号来控制电路的导通或断开。当电路导通时，源极跟随器2123用于将感光器件2121经光照产生的电荷信号转化为电压信号。模数转换器2124用于将电压信号转换为数字信号。加法器213用于将两路数字信号相加共同输出。
请参阅图7，以16M的图像传感器200举例来说，本发明实施方式的图像传感器200可以将16M的感光像素212合并成4M，或者说，输出合并图像，合并后，感光像素212的大小相当于变成了原来大小的4倍，从而提升了感光像素212的感光度。此外，由于图像传感器200中的噪声大部分都是随机噪声，对于合并之前的感光像素212来说，有可能其中一个或两个像素中存在噪点，而在将四个感光像素212合并成一个大的感光像素212后，减小了噪点对该大像素的影响，也即是减弱了噪声，提高了信噪比。
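2*2像素合并的过程可以用如下自包含的Python草图示意（函数名与一维数据布局均为本文假设）：将四个相邻同色感光像素的读数相加输出为一个合并像素，随机噪点经合并后对大像素的相对影响减弱，从而提高信噪比。

```python
def bin_2x2(raw, width, height):
    # raw：按行展开的一维感光像素读数列表，width、height 为偶数
    binned = []
    for y in range(0, height, 2):
        row = []
        for x in range(0, width, 2):
            # 将 2*2 相邻同色感光像素的读数相加，合并为一个大像素
            s = (raw[y * width + x] + raw[y * width + x + 1]
                 + raw[(y + 1) * width + x] + raw[(y + 1) * width + x + 1])
            row.append(s)
        binned.append(row)
    return binned
```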
但在感光像素212大小变大的同时，由于像素数量降低，合并图像的解析度也将降低。
在成像时,当同一滤光片单元220a下覆盖的四个感光器件2121依次曝光时,经过图像处理可以输出色块图像。
具体地，感光器件2121用于将光照转化为电荷，且产生的电荷与光照强度成比例关系，传输管2122用于根据控制信号来控制电路的导通或断开。当电路导通时，源极跟随器2123用于将感光器件2121经光照产生的电荷信号转化为电压信号。模数转换器2124用于将电压信号转换为数字信号，以供与图像传感器200相连的图像处理装置100处理。
请参阅图8,以16M的图像传感器200举例来说,本发明实施方式的图像传感器200还可以保持16M的感光像素212输出,或者说输出色块图像,色块图像包括图像像素单元,图像像素单元包括2*2阵列排布的原始像素,该原始像素的大小与感光像素212大小相同,然而由于覆盖相邻四个感光器件2121的滤光片单元220a为同一颜色,也即是说,虽然四个感光器件2121分别曝光,但其覆盖的滤光片单元220a颜色相同,因此,输出的每个图像像素单元的相邻四个原始像素颜色相同,仍然无法提高图像的解析度。
本发明实施方式的图像处理方法,可以用于对输出的色块图像进行处理,以得到仿原图像。
可以理解,合并图像在输出时,四个相邻的同色感光像素212以合并像素输出,如此,合并图像中的四个相邻的合并像素仍可看作是典型的拜耳阵列,可以直接被图像处理装置100接收进行处理以输出合并真彩图像。而色块图像在输出时每个感光像素212分别输出,由于相邻四个感光像素212颜色相同,因此,一个图像像素单元的四个相邻原始像素的颜色相同,是非典型的拜耳阵列。而图像处理装置100无法对非典型拜耳阵列直接进行处理,也即是说,在图像传感器200采用同一图像处理装置100时,为兼容两种模式的真彩图像输出即合并模式下的合并真彩图像输出及色块模式下的仿原真彩图像输出,需将色块图像转化为仿原图像,或者说将非典型拜耳阵列的图像像素单元转化为典型拜耳阵列的像素排布。
仿原图像包括呈拜耳阵列排布的仿原像素。仿原像素包括当前像素,原始像素包括与当前像素对应的关联像素。
请参阅图9,以图9为例,当前像素为R3’3’和R5’5’,对应的关联像素分别为R33和B55。
在获取当前像素R3’3’时，由于R3’3’与对应的关联像素R33的颜色相同，因此在转化时直接将R33的像素值作为R3’3’的像素值。
在获取当前像素R5’5’时，由于R5’5’与对应的关联像素B55的颜色不相同，显然不能直接将B55的像素值作为R5’5’的像素值，需要根据R5’5’的关联像素单元通过第一插值算法计算得到。
需要说明的是,以上及下文中的像素值应当广义理解为该像素的颜色属性数值,例如色彩值。
关联像素单元包括多个（例如4个）颜色与当前像素相同、且位于与当前像素相邻的图像像素单元中的原始像素。
需要说明的是，此处相邻应做广义理解，以图9为例，R5’5’对应的关联像素为B55，与B55所在的图像像素单元相邻的且与R5’5’颜色相同的关联像素单元所在的图像像素单元分别为R44、R74、R47、R77所在的图像像素单元，而并非在空间上距离B55所在的图像像素单元更远的其他的红色图像像素单元。其中，与B55在空间上距离最近的红色原始像素分别为R44、R74、R47和R77，也即是说，R5’5’的关联像素单元由R44、R74、R47和R77组成，R5’5’与R44、R74、R47和R77的颜色相同且相邻。
如此，针对不同情况的当前像素，采用不同方式将原始像素转化为仿原像素，从而将色块图像转化为仿原图像，由于拍摄图像时，采用了特殊的拜耳阵列结构的滤光片，提高了图像信噪比，并且在图像处理过程中，通过第一插值算法对色块图像进行插值处理，提高了图像的分辨率及解析度。
请参阅图10,在某些实施方式中,步骤S44包括以下步骤:
步骤S441:计算关联像素单元各个方向上的渐变量;
步骤S442:计算关联像素单元各个方向上的权重;和
步骤S443:根据渐变量及权重计算当前像素的像素值。
请参阅图11,在某些实施方式中,第二计算单元144包括第一计算子单元1441、第二计算子单元1442和第三计算子单元1443。步骤S441可以由第一计算子单元1441实现,步骤S442可以由第二计算子单元1442实现,步骤S443可以由第三计算子单元1443实现。或者说,第一计算子单元1441用于计算关联像素单元各个方向上的渐变量,第二计算子单元1442用于计算关联像素单元各个方向上的权重,第三计算子单元1443用于根据渐变量及权重计算当前像素的像素值。
具体地,第一插值算法是参考图像在不同方向上的能量渐变,将与当前像素对应的颜色相同且相邻的关联像素单元依据在不同方向上的渐变权重大小,通过线性插值的方式计算得到当前像素的像素值。其中,在能量变化量较小的方向上,参考比重较大,因此,在插值计算时的权重较大。
在某些示例中,为方便计算,仅考虑水平和垂直方向。
R5’5’由R44、R74、R47和R77插值得到，而在水平和垂直方向上并不存在颜色相同的原始像素，因此需首先根据关联像素单元计算在水平和垂直方向上该颜色的分量。其中，水平方向上的分量为R45和R75，垂直方向上的分量为R54和R57，可以分别通过R44、R74、R47和R77计算得到。
具体地,R45=R44*2/3+R47*1/3,R75=2/3*R74+1/3*R77,R54=2/3*R44+1/3*R74,R57=2/3*R47+1/3*R77。
然后,分别计算在水平和垂直方向的渐变量及权重,也即是说,根据该颜色在不同方向的渐变量,以确定在插值时不同方向的参考权重,在渐变量小的方向,权重较大,而在渐变量较大的方向,权重较小。其中,在水平方向的渐变量X1=|R45-R75|,在垂直方向上的渐变量X2=|R54-R57|,W1=X1/(X1+X2),W2=X2/(X1+X2)。
如此,根据上述可计算得到,R5’5’=(2/3*R45+1/3*R75)*W2+(2/3*R54+1/3*R57)*W1。可以理解,若X1大于X2,则W1大于W2,因此计算时水平方向的权重为W2,而垂直方向的权重为W1,反之亦反。
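上述第一插值算法对R5’5’的完整计算可以用如下Python草图表示（其中两个方向渐变量均为零时的处理为本文补充的假设性边界情形，说明书并未规定）：

```python
def interpolate_r55(r44, r47, r74, r77):
    # 水平方向上的颜色分量
    r45 = 2/3 * r44 + 1/3 * r47
    r75 = 2/3 * r74 + 1/3 * r77
    # 垂直方向上的颜色分量
    r54 = 2/3 * r44 + 1/3 * r74
    r57 = 2/3 * r47 + 1/3 * r77
    # 水平与垂直方向的渐变量
    x1 = abs(r45 - r75)
    x2 = abs(r54 - r57)
    if x1 + x2 == 0:
        # 假设的边界处理：两个方向均无渐变时直接取各分量均值
        return (r45 + r75 + r54 + r57) / 4
    w1 = x1 / (x1 + x2)
    w2 = x2 / (x1 + x2)
    # 渐变量小的方向权重大：水平项乘以W2，垂直项乘以W1
    return (2/3 * r45 + 1/3 * r75) * w2 + (2/3 * r54 + 1/3 * r57) * w1
```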
如此,可根据第一插值算法计算得到当前像素的像素值。依据上述对关联像素的处理方式,可将原始像素转化为呈典型拜耳阵列排布的仿原像素,也即是说,相邻的四个2*2阵列的仿原像素包括一个红色仿原像素,两个绿色仿原像素和一个蓝色仿原像素。
需要说明的是,第一插值算法包括但不限于本实施例中公开的在计算时仅考虑垂直和水平两个方向相同颜色的像素值的方式,例如还可以参考其他颜色的像素值。
请参阅图12及图13,在某些实施方式中,步骤S44前包括步骤:
步骤S46:对色块图像做白平衡补偿;
步骤S44后包括步骤:
步骤S47:对仿原图像做白平衡补偿还原。
在某些实施方式中,转化模块140包括白平衡补偿单元146和白平衡补偿还原单元147。步骤S46可以由白平衡补偿单元146实现,步骤S47可以由白平衡补偿还原单元147实现。或者说,白平衡补偿单元146用于对色块图像做白平衡补偿,白平衡补偿还原单元147用于对仿原图像做白平衡补偿还原。
具体地，在一些示例中，在将色块图像转化为仿原图像的过程中，在第一插值算法中，红色和蓝色仿原像素往往不仅参考与其颜色相同的通道的原始像素的颜色，还会参考绿色通道的原始像素的颜色权重，因此，在第一插值算法前需要进行白平衡补偿，以在插值计算中排除白平衡的影响。为了不破坏色块图像的白平衡，在插值之后需要将仿原图像进行白平衡补偿还原，还原时根据补偿中红色、绿色及蓝色的增益值进行还原。
如此,可排除在第一插值算法过程中白平衡的影响,并且能够使得插值后得到的仿原图像保持色块图像的白平衡。
请再次参阅图12及图13,在某些实施方式中,步骤S44前包括步骤:
步骤S48:对色块图像做坏点补偿。
在某些实施方式中,转化模块140包括坏点补偿单元148。步骤S48可以由坏点补偿单元148实现。或者说,坏点补偿单元148用于对色块图像做坏点补偿。
可以理解,受限于制造工艺,图像传感器200可能会存在坏点,坏点通常不随感光度变化而始终呈现同一颜色,坏点的存在将影响图像质量,因此,为保证插值的准确,不受坏点的影响,需要在第一插值算法前进行坏点补偿。
具体地，坏点补偿过程中，可以对原始像素进行检测，当检测到某一原始像素为坏点时，可根据其所在的图像像素单元的其他原始像素的像素值进行坏点补偿。
如此,可排除坏点对插值处理的影响,提高图像质量。
请再次参阅图12及图13,在某些实施方式中,步骤S44前包括步骤:
步骤S49:对色块图像做串扰补偿。
在某些实施方式中，转化模块140包括串扰补偿单元149。步骤S49可以由串扰补偿单元149实现。或者说，串扰补偿单元149用于对色块图像做串扰补偿。
具体地，一个感光像素单元210a中的四个感光像素212覆盖同一颜色的滤光片，而感光像素212之间可能存在感光度的差异，以至于由仿原图像转化输出的真彩图像中的纯色区域会出现固定型谱噪声，影响图像的质量。因此，需要对色块图像进行串扰补偿。
如上述解释说明，进行串扰补偿，需要在成像装置1000的图像传感器200制造过程中获得补偿参数，并将串扰补偿的相关参数预置于成像装置1000的存储器中或装设有成像装置1000的电子装置10000（例如手机或平板电脑）中。
预定光环境例如可包括LED匀光板,5000K左右的色温,亮度1000勒克斯左右,成像参数可包括增益值,快门值及镜头位置。设定好相关参数后,进行串扰补偿参数的获取。
处理过程中,首先在设定的光环境中以设置好的成像参数,获取多张色块图像,并合并成一张色块图像,如此可减少以单张色块图像作为校准基础的噪声影响。
请参阅图14，以图14中的图像像素单元Gr为例，其包括Gr1、Gr2、Gr3和Gr4，串扰补偿目的在于将感光度可能存在差异的感光像素212通过补偿基本校准至同一水平。该图像像素单元的平均像素值为Gr_avg=(Gr1+Gr2+Gr3+Gr4)/4，可基本表征这四个感光像素212的感光度的平均水平。以此平均值作为基础值，分别计算Gr1/Gr_avg，Gr2/Gr_avg，Gr3/Gr_avg和Gr4/Gr_avg。可以理解，通过计算每一个原始像素的像素值与该图像像素单元的平均像素值的比值，可以基本反映每个原始像素与基础值的偏差，记录四个比值并作为补偿参数记录到相关装置的存储器中，以在成像时进行调取对每个原始像素进行补偿，从而减少串扰，提高图像质量。
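补偿参数的计算与成像时的调取补偿可以用如下Python草图示意（函数划分与补偿的施加方式为本文假设）：

```python
def crosstalk_params(gr1, gr2, gr3, gr4):
    # 以图像像素单元的平均像素值为基础值，计算各原始像素与基础值的比值作为补偿参数
    gr_avg = (gr1 + gr2 + gr3 + gr4) / 4
    return [gr1 / gr_avg, gr2 / gr_avg, gr3 / gr_avg, gr4 / gr_avg]

def apply_crosstalk_compensation(pixels, ratios):
    # 成像时调取预置的比值参数，将每个原始像素除以对应比值，校准到同一感光水平
    return [p / r for p, r in zip(pixels, ratios)]
```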
通常,在设定串扰补偿参数后还应当验证所设定的参数是否准确。
验证过程中，首先以相同的光环境和成像参数获取一张色块图像，依据计算得到的补偿参数对该色块图像进行串扰补偿，计算补偿后的Gr’_avg、Gr’1/Gr’_avg、Gr’2/Gr’_avg、Gr’3/Gr’_avg和Gr’4/Gr’_avg。根据计算结果判断补偿参数是否准确，判断可从宏观与微观两个角度考虑。微观是指某一个原始像素在补偿后仍然偏差较大，成像后易被使用者感知，而宏观则从全局角度考虑，也即是在补偿后仍存在偏差的原始像素的总数目较多时，此时即便单独的每一个原始像素的偏差不大，但作为整体仍然会被使用者感知。因此，针对微观设置一个比例阈值即可，针对宏观则需设置一个比例阈值和一个数量阈值。如此，可对设置的串扰补偿参数进行验证，确保补偿参数的正确，以减少串扰对图像质量的影响。
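宏观与微观两个角度的验证判据可以用如下Python草图示意（三个阈值的具体取值均为本文假设的示例值，并非专利限定）：

```python
def verify_compensation(deviations, micro_thresh=0.05, macro_thresh=0.02, count_thresh=10):
    # deviations：补偿后各原始像素相对其单元均值的偏差绝对值列表
    # 微观：任一原始像素偏差超过比例阈值，即认为补偿参数不准确
    if any(d > micro_thresh for d in deviations):
        return False
    # 宏观：偏差超过较小比例阈值的原始像素总数过多，同样认为不准确
    if sum(1 for d in deviations if d > macro_thresh) > count_thresh:
        return False
    return True
```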
请参阅图15和图16,在某些实施方式中,步骤S44后还包括步骤:
步骤S50:对仿原图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
在某些实施方式中,转化模块140包括处理单元150。步骤S50可以由处理单元150实现,或者说,处理单元150用于对仿原图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
可以理解，将色块图像转化为仿原图像后，仿原像素排布为典型的拜耳阵列，可采用处理单元150进行处理，处理过程中包括镜片阴影校正、去马赛克、降噪和边缘锐化处理，如此，处理后即可得到真彩图像输出给用户。
对于一帧色块图像的人脸区域内的图像，需利用第二插值算法进行图像处理。第二插值算法的插值过程是：对人脸区域内的每一个图像像素单元中所有的原始像素的像素值取均值，随后判断当前像素与关联像素的颜色是否相同，在当前像素与关联像素的颜色相同时，取关联像素的像素值作为当前像素的像素值，在当前像素与关联像素的颜色不同时，取最邻近的与当前像素颜色相同的图像像素单元中的原始像素的像素值作为当前像素的像素值。
具体地，请参阅图17，以图17为例，先计算各个图像像素单元中原始像素的像素值的均值：Ravg=(R1+R2+R3+R4)/4，Gravg=(Gr1+Gr2+Gr3+Gr4)/4，Gbavg=(Gb1+Gb2+Gb3+Gb4)/4，Bavg=(B1+B2+B3+B4)/4。此时，R11、R12、R21、R22的像素值均为Ravg，Gr31、Gr32、Gr41、Gr42的像素值均为Gravg，Gb13、Gb14、Gb23、Gb24的像素值均为Gbavg，B33、B34、B43、B44的像素值均为Bavg。以当前像素B22为例，当前像素B22对应的关联像素为R22，由于当前像素B22的颜色与关联像素R22的颜色不同，因此当前像素B22的像素值应取最邻近的蓝色滤光片对应的像素值，即取B33、B34、B43、B44中任一Bavg的值。同样地，其他颜色也采用第二插值算法进行计算以得到各个像素的像素值。
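第二插值算法的两个核心步骤（单元内取均值、按颜色取值）可以用如下Python草图示意（函数划分与参数名均为本文假设）：

```python
def unit_average(unit_pixels):
    # 对一个图像像素单元内所有同色原始像素的像素值取均值
    return sum(unit_pixels) / len(unit_pixels)

def second_interp_pixel(cur_color, assoc_color, assoc_unit_avg, nearest_same_color_avg):
    # 当前像素与关联像素颜色相同时，取关联像素（所在单元均值）的值；
    # 颜色不同时，取最邻近的同色图像像素单元的均值
    return assoc_unit_avg if cur_color == assoc_color else nearest_same_color_avg
```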
如此，采用第二插值算法，由非典型拜耳阵列转换成典型的拜耳阵列过程中所需的复杂度较小，第二插值算法同样能提升仿原图像的解析度，但图像的仿原效果比第一插值算法的仿原效果略差。因此，用第一插值算法处理人脸区域外的图像，而采用第二插值算法处理人脸区域内的图像，提升图像的解析度和仿原效果，提升了用户体验，同时减少了图像处理所需的时间。
请参阅图18,本发明实施方式的成像装置1000包括上述的图像处理装置100和图像传感器200。图像传感器200用于产生色块图像。
本发明实施方式的成像装置1000根据色块图像识别人脸区域,对人脸区域外的图像采用第一插值算法进行处理,以提高人脸区域外的图像的分辨率及解析度,对人脸区域内的图像采用复杂度小于第一插值算法的第二插值算法进行处理,在提高图像信噪比、分辨率和解析度的同时,减少了所需处理的数据和处理时间,提升了用户体验。
请参阅图19,本发明实施方式的电子装置10000包括上述成像装置1000和触摸屏2000。
在某些实施方式中,电子装置10000包括手机或平板电脑。
手机和平板电脑均带有摄像头即成像装置1000,用户使用手机或平板电脑进行拍摄时,可以采用本发明实施方式的图像处理方法,以得到高解析度的图片。
当然,电子装置10000也可以包括其他具有成像功能的电子设备。
在某些实施方式中,成像装置1000包括前置相机或后置相机。
可以理解,许多电子装置10000包括前置相机和后置相机,前置相机和后置相机均可采用本发明实施方式的图像处理方法实现图像处理,以提升用户体验。
请参阅图20,本发明实施方式的电子装置10000包括壳体300、处理器400、存储器500、电路板600和电源电路700。电路板600安置在壳体300围成的空间内部。处理器400和存储器500设置在电路板600上。电源电路700用于为电子装置10000的各个电路或器件供电。存储器500用于存储可执行程序代码。处理器400通过读取存储器500中存储的可执行程序代码来运行与可执行程序代码对应的程序,以用于执行上述任一实施方式的图像处理方法。
例如,处理器400可以用于执行以下步骤:
根据色块图像识别人脸区域;
将色块图像转化成仿原图像,仿原图像包括阵列排布的仿原像素,仿原像素包括当前像素,原始像素包括与当前像素对应的关联像素,将色块图像转化成仿原图像的步骤包括以下步骤:
判断关联像素是否位于人脸区域外;
在关联像素位于人脸区域外时,判断当前像素的颜色与关联像素的颜色是否相同;
在当前像素的颜色与关联像素的颜色相同时,将关联像素的像素值作为当前像素的像 素值;和
在当前像素的颜色与关联像素的颜色不同时,根据关联像素单元的像素值通过第一插值算法计算当前像素的像素值,图像像素单元包括关联像素单元,关联像素单元的颜色与当前像素相同且与当前像素相邻;和
在关联像素位于人脸区域时,通过第二插值算法计算当前像素的像素值,第二插值算法的复杂度小于第一插值算法。
需要说明的是,前述对图像处理方法和图像处理装置100的解释说明也适用于本发明实施方式的电子装置10000,此处不再赘述。
在本说明书的描述中,参考术语“一个实施方式”、“一些实施方式”、“示意性实施方式”、“示例”、“具体示例”或“一些示例”等的描述意指结合所述实施方式或示例描述的具体特征、结构、材料或者特点包含于本发明的至少一个实施方式或示例中。在本说明书中,对上述术语的示意性表述不一定指的是相同的实施方式或示例。而且,描述的具体特征、结构、材料或者特点可以在任何的一个或多个实施方式或示例中以合适的方式结合。
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括一个或更多个用于实现特定逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分,并且本发明的优选实施方式的范围包括另外的实现,其中可以不按所示出或讨论的顺序,包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能,这应被本发明的实施例所属技术领域的技术人员所理解。
在流程图中表示或在此以其他方式描述的逻辑和/或步骤,例如,可以被认为是用于实现逻辑功能的可执行指令的定序列表,可以具体实现在任何计算机可读介质中,以供指令执行系统、装置或设备(如基于计算机的系统、包括处理模块的系统或其他可以从指令执行系统、装置或设备取指令并执行指令的系统)使用,或结合这些指令执行系统、装置或设备而使用。就本说明书而言,"计算机可读介质"可以是任何可以包含、存储、通信、传播或传输程序以供指令执行系统、装置或设备或结合这些指令执行系统、装置或设备而使用的装置。计算机可读介质的更具体的示例(非穷尽性列表)包括以下:具有一个或多个布线的电连接部(控制方法),便携式计算机盘盒(磁装置),随机存取存储器(RAM),只读存储器(ROM),可擦除可编辑只读存储器(EPROM或闪速存储器),光纤装置,以及便携式光盘只读存储器(CDROM)。另外,计算机可读介质甚至可以是可在其上打印所述程序的纸或其他合适的介质,因为可以例如通过对纸或其他介质进行光学扫描,接着进行编辑、解译或必要时以其他合适方式进行处理来以电子方式获得所述程序,然后将其存储在计算机存储器中。
应当理解,本发明的实施方式的各部分可以用硬件、软件、固件或它们的组合来实现。 在上述实施方式中,多个步骤或方法可以用存储在存储器中且由合适的指令执行系统执行的软件或固件来实现。例如,如果用硬件来实现,和在另一实施方式中一样,可用本领域公知的下列技术中的任一项或他们的组合来实现:具有用于对数据信号实现逻辑功能的逻辑门电路的离散逻辑电路,具有合适的组合逻辑门电路的专用集成电路,可编程门阵列(PGA),现场可编程门阵列(FPGA)等。
本技术领域的普通技术人员可以理解实现上述实施例方法携带的全部或部分步骤是可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,该程序在执行时,包括方法实施例的步骤之一或其组合。
此外,在本发明的各个实施例中的各功能单元可以集成在一个处理模块中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。
上述提到的存储介质可以是只读存储器,磁盘或光盘等。
尽管上面已经示出和描述了本发明的实施方式，可以理解的是，上述实施方式是示例性的，不能理解为对本发明的限制，本领域的普通技术人员在本发明的范围内可以对上述实施方式进行变化、修改、替换和变型。

Claims (21)

  1. 一种图像处理方法，用于处理图像传感器输出的色块图像以输出仿原图像，其特征在于，所述图像传感器包括感光像素单元阵列和设置在所述感光像素单元阵列上的滤光片单元阵列，每个所述滤光片单元覆盖对应一个所述感光像素单元，每个所述感光像素单元包括多个感光像素，所述色块图像包括预定阵列排布的图像像素单元，所述图像像素单元包括多个原始像素，每个所述感光像素对应一个所述原始像素，所述图像处理方法包括以下步骤：
    根据所述色块图像识别人脸区域;
    将所述色块图像转化成仿原图像,所述仿原图像包括阵列排布的仿原像素,所述仿原像素包括当前像素,所述原始像素包括与所述当前像素对应的关联像素,所述将所述色块图像转化成仿原图像的步骤包括以下步骤:
    判断所述关联像素是否位于所述人脸区域外;
    在所述关联像素位于所述人脸区域外时,判断所述当前像素的颜色与所述关联像素的颜色是否相同;
    在所述当前像素的颜色与所述关联像素的颜色相同时,将所述关联像素的像素值作为所述当前像素的像素值;和
    在所述当前像素的颜色与所述关联像素的颜色不同时,根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值,所述图像像素单元包括所述关联像素单元,所述关联像素单元的颜色与所述当前像素相同且与所述当前像素相邻;和
    在所述关联像素位于所述人脸区域时,通过第二插值算法计算所述当前像素的像素值,所述第二插值算法的复杂度小于所述第一插值算法。
  2. 如权利要求1所述的图像处理方法,其特征在于,所述预定阵列包括拜耳阵列。
  3. 如权利要求1所述的图像处理方法,其特征在于,所述图像像素单元包括2*2阵列的所述原始像素。
  4. 如权利要求1所述的图像处理方法,其特征在于,所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤,包括以下步骤:
    计算所述关联像素单元各个方向上的渐变量；
    计算所述关联像素单元各个方向上的权重；和
    根据所述渐变量及所述权重计算所述当前像素的像素值。
  5. 如权利要求1所述的图像处理方法,其特征在于,所述图像处理方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤前,包括以下步骤:
    对所述色块图像做白平衡补偿;
    所述图像处理方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤后包括以下步骤:
    对所述仿原图像做白平衡补偿还原。
  6. 如权利要求1所述的图像处理方法,其特征在于,所述图像处理方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤前,包括以下步骤:
    对所述色块图像做坏点补偿。
  7. 如权利要求1所述的图像处理方法,其特征在于,所述图像处理方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤前,包括以下步骤:
    对所述色块图像做串扰补偿。
  8. 如权利要求1所述的图像处理方法,其特征在于,所述图像处理方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤后,包括以下步骤:
    对所述仿原图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
  9. 一种图像处理装置，用于处理图像传感器输出的色块图像以输出仿原图像，其特征在于，所述图像传感器包括感光像素单元阵列和设置在所述感光像素单元阵列上的滤光片单元阵列，每个所述滤光片单元覆盖对应一个所述感光像素单元，每个所述感光像素单元包括多个感光像素，所述色块图像包括预定阵列排布的图像像素单元，所述图像像素单元包括多个原始像素，每个所述感光像素对应一个所述原始像素，所述图像处理装置包括：
    识别模块,所述识别模块用于根据所述色块图像识别人脸区域;
    转化模块,所述转化模块用于将所述色块图像转化成仿原图像,所述仿原图像包括阵列排布的仿原像素,所述仿原像素包括当前像素,所述原始像素包括与所述当前像素对应的关联像素;
    所述转化模块包括:
    第一判断单元,所述第一判断单元用于判断所述关联像素是否位于所述人脸区域外;
    第二判断单元,所述第二判断单元用于在所述关联像素位于所述人脸区域外时,判断所述当前像素的颜色与所述关联像素的颜色是否相同;
    第一计算单元,所述第一计算单元用于在所述当前像素的颜色与所述关联像素的颜色相同时,将所述关联像素的像素值作为所述当前像素的像素值;
    第二计算单元,所述第二计算单元用于在所述当前像素的颜色与所述关联像素的颜色不同时,根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值,所述图像像素单元包括所述关联像素单元,所述关联像素单元的颜色与所述当前像素相同且与所述当前像素相邻;和
    第三计算单元,所述第三计算单元用于在所述关联像素位于所述人脸区域时,通过第二插值算法计算所述当前像素的像素值,所述第二插值算法的复杂度小于所述第一插值算法。
  10. 如权利要求9所述的图像处理装置,其特征在于,所述预定阵列包括拜耳阵列。
  11. 如权利要求9所述的图像处理装置,其特征在于,所述图像像素单元包括2*2阵列的所述原始像素。
  12. 如权利要求9所述的图像处理装置,其特征在于,所述第二计算单元包括:
    第一计算子单元，所述第一计算子单元用于计算所述关联像素单元各个方向上的渐变量；
    第二计算子单元，所述第二计算子单元用于计算所述关联像素单元各个方向上的权重；和
    第三计算子单元,所述第三计算子单元用于根据所述渐变量及所述权重计算所述当前像素的像素值。
  13. 如权利要求9所述的图像处理装置,其特征在于,所述转化模块包括:
    白平衡补偿单元,所述白平衡补偿单元用于对所述色块图像做白平衡补偿;
    白平衡补偿还原单元,所述白平衡补偿还原单元用于对所述仿原图像做白平衡补偿还原。
  14. 如权利要求9所述的图像处理装置,其特征在于,所述转化模块包括:
    坏点补偿单元,所述坏点补偿单元用于对所述色块图像做坏点补偿。
  15. 如权利要求9所述的图像处理装置,其特征在于,所述转化模块包括:
    串扰补偿单元,所述串扰补偿单元用于对所述色块图像做串扰补偿。
  16. 如权利要求9所述的图像处理装置,其特征在于,所述转化模块包括:
    处理单元，所述处理单元用于对所述仿原图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
  17. 一种成像装置,其特征在于,包括:
    如权利要求9-16任意一项所述的图像处理装置;和
    图像传感器,用于产生所述色块图像。
  18. 一种电子装置,其特征在于,包括:
    如权利要求17所述的成像装置;和
    触摸屏。
  19. 如权利要求18所述的电子装置,其特征在于,所述电子装置包括手机或平板电脑。
  20. 如权利要求18所述的电子装置,其特征在于,所述成像装置包括前置相机或后置相机。
  21. 一种电子装置,包括壳体、处理器、存储器、电路板和电源电路,其特征在于,所述电路板安置在所述壳体围成的空间内部,所述处理器和所述存储器设置在所述电路板上;所述电源电路用于为所述电子装置的各个电路或器件供电;所述存储器用于存储可执行程序代码;所述处理器通过读取所述存储器中存储的可执行程序代码来运行与所述可执行程序代码对应的程序,以用于执行如权利要求1至8中任意一项所述的图像处理方法。
PCT/CN2017/085408 2016-11-29 2017-05-22 图像处理方法、图像处理装置、成像装置及电子装置 WO2018099011A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611079541.3A CN106604001B (zh) 2016-11-29 2016-11-29 图像处理方法、图像处理装置、成像装置及电子装置
CN201611079541.3 2016-11-29

Publications (1)

Publication Number Publication Date
WO2018099011A1 true WO2018099011A1 (zh) 2018-06-07

Family

ID=58595711

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/085408 WO2018099011A1 (zh) 2016-11-29 2017-05-22 图像处理方法、图像处理装置、成像装置及电子装置

Country Status (5)

Country Link
US (2) US10249021B2 (zh)
EP (1) EP3328075B1 (zh)
CN (1) CN106604001B (zh)
ES (1) ES2752655T3 (zh)
WO (1) WO2018099011A1 (zh)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106507019B (zh) * 2016-11-29 2019-05-10 Oppo广东移动通信有限公司 控制方法、控制装置、电子装置
CN106604001B (zh) 2016-11-29 2018-06-29 广东欧珀移动通信有限公司 图像处理方法、图像处理装置、成像装置及电子装置
CN106507068B (zh) * 2016-11-29 2018-05-04 广东欧珀移动通信有限公司 图像处理方法及装置、控制方法及装置、成像及电子装置
CN107808361B (zh) * 2017-10-30 2021-08-10 努比亚技术有限公司 图像处理方法、移动终端及计算机可读存储介质
CN108391111A (zh) * 2018-02-27 2018-08-10 深圳Tcl新技术有限公司 图像清晰度调节方法、显示装置及计算机可读存储介质
CN108776784A (zh) * 2018-05-31 2018-11-09 广东新康博思信息技术有限公司 一种基于图像识别的移动执法系统
CN108897178A (zh) * 2018-08-31 2018-11-27 武汉华星光电技术有限公司 彩色滤光片基板及显示面板
CN112839215B (zh) * 2019-11-22 2022-05-13 华为技术有限公司 摄像模组、摄像头、终端设备、图像信息确定方法及存储介质
CN114615438B (zh) * 2022-03-07 2023-09-15 江西合力泰科技有限公司 一种摄像头芯片表面黑点补偿方法

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103531603A (zh) * 2013-10-30 2014-01-22 上海集成电路研发中心有限公司 一种cmos图像传感器
CN104168403A (zh) * 2014-06-27 2014-11-26 深圳市大疆创新科技有限公司 基于拜尔颜色滤波阵列的高动态范围视频录制方法和装置
CN105578067A (zh) * 2015-12-18 2016-05-11 广东欧珀移动通信有限公司 图像生成方法、装置及终端设备
CN105611124A (zh) * 2015-12-18 2016-05-25 广东欧珀移动通信有限公司 图像传感器、成像方法、成像装置及电子装置
WO2016122896A1 (en) * 2015-01-28 2016-08-04 Qualcomm Incorporated Graphics processing unit with bayer mapping
CN106357967A (zh) * 2016-11-29 2017-01-25 广东欧珀移动通信有限公司 控制方法、控制装置和电子装置
CN106454289A (zh) * 2016-11-29 2017-02-22 广东欧珀移动通信有限公司 控制方法、控制装置及电子装置
CN106504218A (zh) * 2016-11-29 2017-03-15 广东欧珀移动通信有限公司 控制方法、控制装置及电子装置
CN106507019A (zh) * 2016-11-29 2017-03-15 广东欧珀移动通信有限公司 控制方法、控制装置、电子装置
CN106604001A (zh) * 2016-11-29 2017-04-26 广东欧珀移动通信有限公司 图像处理方法、图像处理装置、成像装置及电子装置

Family Cites Families (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW563365B (en) * 2001-09-24 2003-11-21 Winbond Electronics Corp Image compensation processing method of digital camera
JP4458236B2 (ja) * 2003-09-30 2010-04-28 パナソニック株式会社 固体撮像装置
JP4816336B2 (ja) 2006-02-07 2011-11-16 日本ビクター株式会社 撮像方法及び撮像装置
US7773138B2 (en) 2006-09-13 2010-08-10 Tower Semiconductor Ltd. Color pattern and pixel level binning for APS image sensor using 2×2 photodiode sharing scheme
JPWO2008053791A1 (ja) * 2006-10-31 2010-02-25 三洋電機株式会社 撮像装置および撮像装置における映像信号生成方法
JP4795929B2 (ja) * 2006-12-26 2011-10-19 富士通株式会社 補間方法を決定するプログラム、装置、および方法
JP5053654B2 (ja) * 2007-02-09 2012-10-17 オリンパスイメージング株式会社 画像処理装置およびその方法と電子カメラ
JP4359634B2 (ja) * 2007-06-21 2009-11-04 シャープ株式会社 カラー固体撮像装置、および画素信号の読み出し方法
US8295594B2 (en) * 2007-10-09 2012-10-23 Samsung Display Co., Ltd. Systems and methods for selective handling of out-of-gamut color conversions
JP2010028722A (ja) * 2008-07-24 2010-02-04 Sanyo Electric Co Ltd 撮像装置及び画像処理装置
JP5219778B2 (ja) * 2008-12-18 2013-06-26 キヤノン株式会社 撮像装置及びその制御方法
CN101815157B (zh) 2009-02-24 2013-01-23 虹软(杭州)科技有限公司 图像及视频的放大方法与相关的图像处理装置
KR101335127B1 (ko) * 2009-08-10 2013-12-03 삼성전자주식회사 에지 적응적 보간 및 노이즈 필터링 방법, 컴퓨터로 판독 가능한 기록매체 및 휴대 단말
US8724928B2 (en) * 2009-08-31 2014-05-13 Intellectual Ventures Fund 83 Llc Using captured high and low resolution images
JP2011248576A (ja) * 2010-05-26 2011-12-08 Olympus Corp 画像処理装置、撮像装置、プログラム及び画像処理方法
US8803994B2 (en) * 2010-11-18 2014-08-12 Canon Kabushiki Kaisha Adaptive spatial sampling using an imaging assembly having a tunable spectral response
US8878950B2 (en) * 2010-12-14 2014-11-04 Pelican Imaging Corporation Systems and methods for synthesizing high resolution images using super-resolution processes
JP5701374B2 (ja) * 2011-02-21 2015-04-15 三菱電機株式会社 画像拡大装置及び方法
WO2013008517A1 (ja) * 2011-07-08 2013-01-17 オリンパス株式会社 撮像装置及び画像生成方法
CN103650487B (zh) * 2011-07-13 2015-04-29 富士胶片株式会社 图像拾取设备、图像拾取元件,和用于校正灵敏度差异的方法
JP2013066140A (ja) * 2011-08-31 2013-04-11 Sony Corp 撮像装置、および信号処理方法、並びにプログラム
US8891866B2 (en) * 2011-08-31 2014-11-18 Sony Corporation Image processing apparatus, image processing method, and program
ITRN20110067A1 (it) * 2011-09-19 2011-12-19 Ceramica Faetano S P A Pavimento ceramico a piastrelle.
JP5519083B2 (ja) * 2011-09-29 2014-06-11 富士フイルム株式会社 画像処理装置、方法、プログラムおよび撮像装置
JP5687608B2 (ja) * 2011-11-28 2015-03-18 オリンパス株式会社 画像処理装置、画像処理方法、および画像処理プログラム
JP5889049B2 (ja) * 2012-03-09 2016-03-22 オリンパス株式会社 画像処理装置、撮像装置及び画像処理方法
JP2013211603A (ja) 2012-03-30 2013-10-10 Sony Corp 撮像装置、撮像方法およびプログラム
DE112013004198T5 (de) * 2012-08-27 2015-05-07 Fujifilm Corporation Bildverarbeitungsvorrichtung, Verfahren, Programm, Aufzeichnungsmedium und Bildaufnahmevorrichtung
KR101744761B1 (ko) * 2012-11-30 2017-06-09 한화테크윈 주식회사 영상처리장치 및 방법
JP2014110507A (ja) * 2012-11-30 2014-06-12 Canon Inc 画像処理装置および画像処理方法
JP5830186B2 (ja) * 2013-02-05 2015-12-09 富士フイルム株式会社 画像処理装置、撮像装置、画像処理方法及びプログラム
US9224362B2 (en) * 2013-03-14 2015-12-29 Microsoft Technology Licensing, Llc Monochromatic edge geometry reconstruction through achromatic guidance
JP6263035B2 (ja) * 2013-05-17 2018-01-17 キヤノン株式会社 撮像装置
CN103810675B (zh) * 2013-09-09 2016-09-21 深圳市华星光电技术有限公司 图像超分辨率重构系统及方法
US9438866B2 (en) 2014-04-23 2016-09-06 Omnivision Technologies, Inc. Image sensor with scaled filter array and in-pixel binning
CN103996170B (zh) * 2014-04-28 2017-01-18 深圳市华星光电技术有限公司 一种具有超高解析度的图像边缘锯齿消除方法
JP6415113B2 (ja) * 2014-05-29 2018-10-31 オリンパス株式会社 撮像装置、画像処理方法
US9888198B2 (en) * 2014-06-03 2018-02-06 Semiconductor Components Industries, Llc Imaging systems having image sensor pixel arrays with sub-pixel resolution capabilities
US9344639B2 (en) * 2014-08-12 2016-05-17 Google Technology Holdings LLC High dynamic range array camera
JP5893713B1 (ja) * 2014-11-04 2016-03-23 オリンパス株式会社 撮像装置、撮像方法、処理プログラム
JP6508626B2 (ja) * 2015-06-16 2019-05-08 オリンパス株式会社 撮像装置、処理プログラム、撮像方法
US9959672B2 (en) * 2015-11-23 2018-05-01 Adobe Systems Incorporated Color-based dynamic sub-division to generate 3D mesh
CN105578005B (zh) * 2015-12-18 2018-01-19 广东欧珀移动通信有限公司 图像传感器的成像方法、成像装置和电子装置
CN105611258A (zh) * 2015-12-18 2016-05-25 广东欧珀移动通信有限公司 图像传感器的成像方法、成像装置和电子装置
CN105611123B (zh) * 2015-12-18 2017-05-24 广东欧珀移动通信有限公司 成像方法、图像传感器、成像装置及电子装置
JP6711612B2 (ja) * 2015-12-21 2020-06-17 キヤノン株式会社 画像処理装置、画像処理方法、および撮像装置
US9883155B2 (en) * 2016-06-14 2018-01-30 Personify, Inc. Methods and systems for combining foreground video and background video using chromatic matching
US20180184066A1 (en) * 2016-12-28 2018-06-28 Intel Corporation Light field retargeting for multi-panel display
US9965865B1 (en) * 2017-03-29 2018-05-08 Amazon Technologies, Inc. Image data segmentation using depth data

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103531603A (zh) * 2013-10-30 2014-01-22 上海集成电路研发中心有限公司 一种cmos图像传感器
CN104168403A (zh) * 2014-06-27 2014-11-26 深圳市大疆创新科技有限公司 基于拜尔颜色滤波阵列的高动态范围视频录制方法和装置
WO2016122896A1 (en) * 2015-01-28 2016-08-04 Qualcomm Incorporated Graphics processing unit with bayer mapping
CN105578067A (zh) * 2015-12-18 2016-05-11 广东欧珀移动通信有限公司 图像生成方法、装置及终端设备
CN105611124A (zh) * 2015-12-18 2016-05-25 广东欧珀移动通信有限公司 图像传感器、成像方法、成像装置及电子装置
CN106357967A (zh) * 2016-11-29 2017-01-25 广东欧珀移动通信有限公司 控制方法、控制装置和电子装置
CN106454289A (zh) * 2016-11-29 2017-02-22 广东欧珀移动通信有限公司 控制方法、控制装置及电子装置
CN106504218A (zh) * 2016-11-29 2017-03-15 广东欧珀移动通信有限公司 控制方法、控制装置及电子装置
CN106507019A (zh) * 2016-11-29 2017-03-15 广东欧珀移动通信有限公司 控制方法、控制装置、电子装置
CN106604001A (zh) * 2016-11-29 2017-04-26 广东欧珀移动通信有限公司 图像处理方法、图像处理装置、成像装置及电子装置

Also Published As

Publication number Publication date
ES2752655T3 (es) 2020-04-06
EP3328075A1 (en) 2018-05-30
US20180150933A1 (en) 2018-05-31
EP3328075B1 (en) 2019-08-28
US10438320B2 (en) 2019-10-08
US10249021B2 (en) 2019-04-02
CN106604001B (zh) 2018-06-29
CN106604001A (zh) 2017-04-26
US20190087934A1 (en) 2019-03-21

