WO2018099030A1 - Control method and electronic device - Google Patents
- Publication number
- WO2018099030A1 (PCT/CN2017/087556)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pixel
- image
- photosensitive
- array
- color
- Prior art date
Classifications
- H04N9/646—Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
- H04N25/48—Increasing resolution by shifting the sensor relative to the scene
- H04N23/672—Focus control based on electronic image sensor signals based on the phase difference signals
- H04N23/843—Demosaicing, e.g. interpolating colour pixel values
- H04N23/88—Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
- H04N25/134—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
- H04N25/46—Extracting pixel data from image sensors by controlling scanning circuits, e.g. by combining or binning pixels
- H04N7/0135—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level, involving interpolation processes
- H04N9/77—Circuits for processing the brightness signal and the chrominance signal relative to each other, e.g. adjusting the phase of the brightness signal relative to the colour signal, correcting differential gain or differential phase
- G06T5/00—Image enhancement or restoration
- H04N25/62—Detection or reduction of noise due to excess charges produced by the exposure, e.g. smear, blooming, ghost image, crosstalk or leakage between pixels
Definitions
- the present invention relates to image processing technologies, and in particular, to a control method and an electronic device.
- An existing image sensor includes an array of photosensitive pixel units and an array of filter cells disposed on the array of photosensitive pixel units; each filter cell covers a corresponding photosensitive pixel unit, and each photosensitive pixel unit includes a plurality of photosensitive pixels.
- The image sensor can be controlled to output a merged image by merging the pixel array: the plurality of photosensitive pixels of the same photosensitive pixel unit are combined and output as one merged pixel. In this way the signal-to-noise ratio of the merged image is improved; however, the resolution of the merged image is lowered.
- the image sensor may also be controlled to output a high-pixel patch image
- the patch image includes an original pixel array, and each photosensitive pixel corresponds to one original pixel.
- the pseudo-original image may include pseudo-original pixels arranged in a Bayer array.
- the original image can be converted into an original true color image by image processing and saved. Interpolation calculations can improve the sharpness of true color images, but they are resource intensive and time consuming, resulting in longer shooting times and poor user experience.
- users tend to focus only on the sharpness in the depth of field in a true color image.
- Embodiments of the present invention provide a control method and an electronic device.
- A control method for controlling an electronic device comprising an imaging device and a display, the imaging device comprising an image sensor, the image sensor comprising an array of photosensitive pixel units and an array of filter units disposed on the array of photosensitive pixel units; each filter unit covers a corresponding photosensitive pixel unit, and each photosensitive pixel unit includes a plurality of photosensitive pixels. The control method comprises the steps of:
- Controlling the image sensor to output a merged image, the merged image including a merged pixel array, where a plurality of photosensitive pixels of the same photosensitive pixel unit are combined and output as one merged pixel;
- Controlling the image sensor to output a patch image, the patch image comprising image pixel units arranged in a predetermined array, each image pixel unit comprising a plurality of original pixels; each photosensitive pixel unit corresponds to one image pixel unit, and each photosensitive pixel corresponds to one original pixel;
- Transforming the patch image into a pseudo-original image by using a first interpolation algorithm, where the pseudo-original image includes pseudo-original pixels arranged in an array, and each photosensitive pixel corresponds to one pseudo-original pixel; the pseudo-original pixels include a current pixel, and the original pixels include an associated pixel corresponding to the current pixel. Converting the patch image into the pseudo-original image by using the first interpolation algorithm includes the following steps:
- when the color of the current pixel is the same as that of the associated pixel, the pixel value of the associated pixel is used as the pixel value of the current pixel;
- when the color of the current pixel differs from that of the associated pixel, the pixel value of the current pixel is calculated by the first interpolation algorithm according to the pixel value of an associated pixel unit, where the image pixel units include the associated pixel unit, the associated pixel unit having the same color as the current pixel and being adjacent to the current pixel; and
- when the associated pixel is outside the depth of field region, the pixel value of the current pixel is calculated by a second interpolation algorithm, the complexity of which is lower than that of the first interpolation algorithm.
- An electronic device comprising an imaging device, a display, and a processor; the imaging device comprising an image sensor, the image sensor comprising an array of photosensitive pixel units and an array of filter units disposed on the array of photosensitive pixel units; each filter unit covers a corresponding photosensitive pixel unit, and each photosensitive pixel unit includes a plurality of photosensitive pixels. The processor is configured to:
- Control the image sensor to output a merged image, the merged image including a merged pixel array, where a plurality of photosensitive pixels of the same photosensitive pixel unit are combined and output as one merged pixel;
- Control the image sensor to output a patch image, the patch image comprising image pixel units arranged in a predetermined array, each image pixel unit comprising a plurality of original pixels; each photosensitive pixel unit corresponds to one image pixel unit, and each photosensitive pixel corresponds to one original pixel;
- Transform the patch image into a pseudo-original image by using a first interpolation algorithm, where the pseudo-original image includes pseudo-original pixels arranged in an array, and each photosensitive pixel corresponds to one pseudo-original pixel; the pseudo-original pixels include a current pixel, and the original pixels include an associated pixel corresponding to the current pixel. Converting the patch image into the pseudo-original image by using the first interpolation algorithm includes the following steps:
- when the color of the current pixel is the same as that of the associated pixel, the pixel value of the associated pixel is used as the pixel value of the current pixel;
- when the color of the current pixel differs from that of the associated pixel, the pixel value of the current pixel is calculated by the first interpolation algorithm according to the pixel value of an associated pixel unit, where the image pixel units include the associated pixel unit, the associated pixel unit having the same color as the current pixel and being adjacent to the current pixel; and
- when the associated pixel is outside the depth of field region, the pixel value of the current pixel is calculated by a second interpolation algorithm, the complexity of which is lower than that of the first interpolation algorithm.
- An electronic device includes a housing, a processor, a memory, a circuit board, a power supply circuit, and an imaging device. The circuit board is disposed inside the space enclosed by the housing; the processor and the memory are disposed on the circuit board; the power supply circuit powers the various circuits or devices of the electronic device; the memory stores executable program code; and the processor, by reading the executable program code stored in the memory, runs a program corresponding to the executable program code in order to execute the control method.
- A computer-readable storage medium has instructions stored therein; when a processor of an electronic device executes the instructions, the electronic device performs the control method.
- The control method, control device, and electronic device of the embodiments of the invention identify and judge the depth-of-field range, adopt different conversion modes for the regions inside and outside that range, and output an appropriate image. This avoids the resource-intensive and time-consuming fixed conversion and output mode of the image sensor, improving work efficiency while still satisfying the user's demand for image clarity.
- FIG. 1 is a schematic flow chart of a control method according to an embodiment of the present invention.
- FIG. 2 is a schematic diagram of functional modules of an electronic device according to an embodiment of the present invention.
- FIG. 3 is a block diagram of an image sensor according to an embodiment of the present invention.
- FIG. 4 is a circuit diagram of an image sensor according to an embodiment of the present invention.
- FIG. 5 is a schematic view of a filter unit according to an embodiment of the present invention.
- FIG. 6 is a schematic structural view of an image sensor according to an embodiment of the present invention.
- FIG. 7 is a schematic diagram showing a state of a merged image according to an embodiment of the present invention.
- FIG. 8 is a schematic diagram showing the state of a patch image according to an embodiment of the present invention.
- FIG. 9 is a schematic view showing the state of a control method according to an embodiment of the present invention.
- FIG. 10 is a schematic diagram showing the state of a control method according to an embodiment of the present invention.
- FIG. 11 is a flow chart showing a control method according to an embodiment of the present invention.
- FIG. 12 is a schematic diagram of functional blocks of a second computing unit according to an embodiment of the present invention.
- FIG. 13 is a schematic diagram showing the state of a control method according to an embodiment of the present invention.
- FIG. 14 is a flow chart of a control method of some embodiments of the present invention.
- FIG. 15 is a schematic diagram of functional modules of a first conversion module in accordance with some embodiments of the present invention.
- FIG. 16 is a flow chart of a control method of some embodiments of the present invention.
- FIG. 17 is a schematic diagram of the state of a control method of some embodiments of the present invention.
- FIG. 18 is a block diagram of an electronic device in accordance with some embodiments of the present invention.
- a control method is used to control an electronic device.
- the electronic device includes an imaging device and a display.
- The imaging device includes an image sensor, the image sensor including an array of photosensitive pixel units and an array of filter units disposed on the array of photosensitive pixel units; each filter unit covers a corresponding photosensitive pixel unit, and each photosensitive pixel unit includes a plurality of photosensitive pixels.
- the control method includes the steps:
- S70 Calculate a pixel value of the current pixel by using a second interpolation algorithm when the associated pixel is outside the depth of field region.
- the complexity of the second interpolation algorithm is smaller than the first interpolation algorithm.
- step S60 includes the steps of:
- S64 determining, when the associated pixel is located in the depth of field region, whether the color of the current pixel is the same as the color of the associated pixel;
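The per-pixel decision described above (same color: reuse the associated pixel's value; different color inside the depth of field: the first interpolation algorithm; outside the depth of field: the second interpolation algorithm) can be sketched as follows. This is an illustrative reconstruction only; the function signature and the placeholder `first_interp`/`second_interp` callables are assumptions, not the patent's actual interfaces.

```python
def convert_pixel(current_color, associated_color, associated_value,
                  in_depth_of_field, first_interp, second_interp):
    """Decide how one pseudo-original pixel is produced from the patch image."""
    if in_depth_of_field:
        if current_color == associated_color:
            # Colors match: the associated pixel's value is used directly.
            return associated_value
        # Colors differ inside the depth of field: costly first interpolation.
        return first_interp()
    # Outside the depth of field: cheaper second interpolation.
    return second_interp()
```

The branch ordering mirrors the claim structure: the depth-of-field test gates which conversion is attempted at all, and the color comparison only matters inside the depth of field.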
- a control device 100 is used to control an electronic device 1000.
- the electronic device 1000 further includes an imaging device and a display.
- the imaging device includes an image sensor 200.
- the control device 100 includes a first control module 110, a division module 120 and a calculation module 130, a merge module 140, a second control module 150, a first conversion module 160, and a second conversion module 170.
- the first conversion module 160 includes a first determining unit 162, a second determining unit 164, a first calculating unit 166, and a second calculating unit 168.
- The control method of the embodiment of the present invention may be implemented by the control device 100 of the embodiment of the present invention; it is applicable to the electronic device 1000 and is used to control the image sensor 200 of the imaging device of the electronic device 1000 to output an image.
- The electronic device 1000 may be, for example, a cell phone or a tablet computer.
- the imaging device includes a front camera or a rear camera.
- the image sensor 200 of the embodiment of the present invention includes a photosensitive pixel unit array 210 and a filter unit array 220 disposed on the photosensitive pixel unit array 210.
- the photosensitive pixel unit array 210 includes a plurality of photosensitive pixel units 210a, each of which includes a plurality of adjacent photosensitive pixels 212.
- Each of the photosensitive pixels 212 includes a photosensitive device 2121 and a transfer tube 2122, wherein the photosensitive device 2121 can be a photodiode, and the transfer tube 2122 can be a MOS transistor.
- the filter unit array 220 includes a plurality of filter units 220a, each of which covers a corresponding one of the photosensitive pixel units 210a.
- The filter cell array 220 includes a Bayer array; that is, the adjacent four filter cells 220a are respectively a red filter unit, a blue filter unit, and two green filter units.
- Each photosensitive pixel unit 210a corresponds to a filter unit 220a of a single color: if one photosensitive pixel unit 210a includes n adjacent photosensitive devices 2121, then one filter unit 220a covers the n photosensitive devices 2121 in that photosensitive pixel unit 210a. The filter unit 220a may be an integral structure, or may be assembled from n independent sub-filters.
- Each photosensitive pixel unit 210a includes four adjacent photosensitive pixels 212; every two adjacent photosensitive pixels 212 together constitute one photosensitive pixel subunit 2120, and the photosensitive pixel subunit 2120 further includes a source follower 2123.
- The photosensitive pixel unit 210a further includes an adder 213. One end electrode of each transfer tube 2122 of a photosensitive pixel subunit 2120 is connected to the cathode electrode of the corresponding photosensitive device 2121; the other end of each transfer tube 2122 is commonly connected to the gate electrode of the source follower 2123, and the source electrode of the source follower 2123 is connected to an analog-to-digital converter 2124.
- the source follower 2123 may be a MOS transistor.
- the two photosensitive pixel subunits 2120 are connected to the adder 213 through respective source followers 2123 and analog to digital converters 2124.
- The adjacent four photosensitive devices 2121 of one photosensitive pixel unit 210a of the image sensor 200 of the embodiment of the present invention share a filter unit 220a of the same color, and each photosensitive device 2121 is connected to a transfer tube 2122.
- the adjacent two photosensitive devices 2121 share a source follower 2123 and an analog to digital converter 2124, and the adjacent four photosensitive devices 2121 share an adder 213.
- the adjacent four photosensitive devices 2121 are arranged in a 2*2 array.
- the two photosensitive devices 2121 in one photosensitive pixel subunit 2120 may be in the same column.
- the pixels may be combined to output a combined image.
- the photosensitive device 2121 is used to convert light into electric charge, and the generated electric charge is proportional to the light intensity, and the transfer tube 2122 is used to control the conduction or disconnection of the circuit according to the control signal.
- the source follower 2123 is used to convert the charge signal generated by the photosensitive device 2121 into a voltage signal.
- Analog to digital converter 2124 is used to convert the voltage signal into a digital signal.
- the adder 213 is for adding the two digital signals together for output, for processing by the image processing module connected to the image sensor 200.
- The image sensor 200 of the embodiment of the present invention can merge 16M of photosensitive pixels into 4M, that is, output a merged image; the merged image includes merged pixels arranged in a predetermined array.
- the plurality of photosensitive pixels 212 of the same photosensitive pixel unit 210a are combined and output as one combined pixel.
- Each photosensitive pixel unit 210a includes four photosensitive pixels 212; that is, after combining, a photosensitive pixel is effectively four times its original size, which improves the sensitivity of the photosensitive pixel.
- Since the noise in the image sensor 200 is mostly random noise, noise may appear in one or two of the photosensitive pixels before combination; after combining the four photosensitive pixels into one large photosensitive pixel, the influence of that noise on the large pixel is reduced, that is, the noise is weakened and the signal-to-noise ratio is improved.
- However, the resolution of the merged image also decreases as the pixel count decreases.
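As a rough illustration of the merge mode described above, the 2*2 binning can be sketched as follows. This is a sketch only: the patent's adder sums two digital signals per subunit, and summing over the full 2*2 unit is assumed here for simplicity.

```python
import numpy as np

def bin_2x2(raw):
    """Combine each 2*2 photosensitive pixel unit into one merged pixel.

    raw: an (H, W) array of per-pixel digital values, H and W even.
    Returns an (H/2, W/2) merged image, trading resolution for SNR.
    """
    h, w = raw.shape
    # Reshape so axes 1 and 3 index the two rows/columns inside each unit,
    # then sum them away to merge the unit into one pixel.
    return raw.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

raw = np.arange(16, dtype=float).reshape(4, 4)   # a toy 4x4 sensor readout
merged = bin_2x2(raw)                            # 2x2 merged image
```

With a 16M readout this mapping yields the 4M merged image the text describes: each output pixel aggregates the charge of four same-color photosensitive pixels.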
- the patch image can be output through image processing.
- the photosensitive device 2121 is used to convert light into electric charge, and the generated electric charge is proportional to the light intensity, and the transfer tube 2122 is used to control the conduction or disconnection of the circuit according to the control signal.
- the source follower 2123 is used to convert the charge signal generated by the photosensitive device 2121 into a voltage signal.
- The analog-to-digital converter 2124 converts the voltage signal into a digital signal for processing by an image processing module coupled to the image sensor 200.
- The image sensor 200 of the embodiment of the present invention can also keep the full 16M photosensitive-pixel output, that is, output a patch image; the patch image includes image pixel units, and each image pixel unit includes original pixels.
- The original pixels are arranged in a 2*2 array, and the size of an original pixel is the same as that of a photosensitive pixel. However, since the filter unit 220a covering four adjacent photosensitive devices 2121 is of a single color (the four photosensitive devices 2121 are exposed separately but covered by the same-color filter unit 220a), the four adjacent original pixels of each image pixel unit are output in the same color; the resolution of the image therefore cannot be improved without further processing.
- The patch image needs to be processed by the image processing module to output a true-color image.
- At the time of output, each photosensitive pixel is output separately in the color patch image. Since the four adjacent photosensitive pixels have the same color, the four adjacent original pixels of one image pixel unit have the same color, forming an atypical Bayer array.
- The image processing module cannot directly process the atypical Bayer array. That is, for the image sensor 200 to use the same image processing module while remaining compatible with true-color image output in both modes (merge mode and color-block mode), the patch image must be converted: the image pixel units of the atypical Bayer array are converted into the pixel arrangement of a typical Bayer array.
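The conversion goal can be visualized by comparing the two layouts. The 4*4 patterns below are purely illustrative: the patch image repeats each color over a 2*2 block, while the image processing module expects the standard RGGB mosaic.

```python
# Atypical Bayer array of the patch image: each color fills a 2*2 block.
atypical = [
    ['R', 'R', 'G', 'G'],
    ['R', 'R', 'G', 'G'],
    ['G', 'G', 'B', 'B'],
    ['G', 'G', 'B', 'B'],
]

# Typical Bayer array the conversion must produce: every 2*2 block
# contains one red, two green, and one blue pixel.
typical = [
    ['R', 'G', 'R', 'G'],
    ['G', 'B', 'G', 'B'],
    ['R', 'G', 'R', 'G'],
    ['G', 'B', 'G', 'B'],
]
```

The first and second interpolation algorithms described below are two ways of filling the `typical` positions from the `atypical` samples.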
- The merged image is divided into M*N analysis regions, and the phase difference of each analysis region is analyzed; if the phase difference is close to 0, the analysis region lies within the depth of field. Since the depth of field is usually a range, the analysis regions whose phase difference falls within a certain range are within the depth of field. During shooting, the part within the depth of field is imaged clearly, and the user is more sensitive to it; the part outside the depth of field cannot be imaged clearly. The first interpolation algorithm is therefore used within the depth of field to convert the color-block image into the pseudo-original image.
- The complexity of the first interpolation algorithm, in both time and space, is high: it occupies substantial memory resources and takes a long time to compute. High-resolution conversion processing is therefore performed only on the portion within the depth-of-field region.
- The portion outside the depth-of-field region is simply processed, or a second interpolation algorithm of lower complexity is used, to convert it into a typical Bayer array for image output.
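A minimal sketch of the region analysis described above, assuming a simple absolute-threshold test on each region's phase difference; the threshold value is an illustrative assumption, not a figure from the patent.

```python
import numpy as np

def depth_of_field_mask(phase_diff, threshold=0.5):
    """phase_diff: an (M, N) array of per-region phase differences.

    Returns a boolean (M, N) mask, True where the region's phase difference
    is close enough to 0 to be treated as inside the depth of field.
    """
    return np.abs(phase_diff) <= threshold

phase = np.array([[0.1, 2.0],
                  [0.4, 3.5]])
mask = depth_of_field_mask(phase)
```

Regions where `mask` is True would receive the costly first interpolation; the rest would fall through to the cheaper second interpolation.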
- The pseudo-original image includes pseudo-original pixels arranged in a Bayer array.
- the pseudo original pixel includes a current pixel, and the original pixel includes an associated pixel corresponding to the current pixel.
- the first interpolation algorithm is illustrated by using FIG. 10 as an example.
- the current pixels are R3'3' and R5'5', and the corresponding associated pixels are R33 and B55, respectively.
- the pixel values above and below should be broadly understood as the color attribute values of the pixel, such as color values.
- the associated pixel unit includes a plurality of, for example, four, original pixels in the image pixel unit that are the same color as the current pixel and are adjacent to the current pixel.
- The associated pixel corresponding to R5'5' is B55; the associated pixel unit of R5'5' is adjacent to the image pixel unit where B55 is located and has the same color as R5'5'.
- the image pixel units in which the associated pixel unit is located are image pixel units in which R44, R74, R47, and R77 are located, and are not other red image pixel units that are spatially farther from the image pixel unit in which B55 is located.
- The red original pixels closest to B55 are R44, R74, R47, and R77; that is, the associated pixel unit of R5'5' is composed of R44, R74, R47, and R77, and R5'5' has the same color as, and is adjacent to, R44, R74, R47, and R77.
- Step S68 includes the following steps:
- S681: calculating the gradient amount in each direction of the associated pixel unit;
- S682: calculating the weight in each direction of the associated pixel unit; and
- S683: calculating the pixel value of the current pixel according to the gradient amounts and the weights.
- the second computing unit 168 includes a first computing subunit 1681, a second computing subunit 1682, and a third computing subunit 1683.
- Step S681 can be implemented by the first computing sub-unit 1681
- step S682 can be implemented by the second computing sub-unit 1682
- step S683 can be implemented by the third computing sub-unit 1683.
- The first calculation subunit 1681 is used to calculate the gradient amount in each direction of the associated pixel unit;
- the second calculation subunit 1682 is used to calculate the weight in each direction of the associated pixel unit;
- the third calculation subunit 1683 is used to calculate the pixel value of the current pixel from the gradient amounts and the weights.
- The interpolation method references the energy gradient of the image in different directions: the current pixel is calculated by linear interpolation from the same-color, adjacent associated pixel unit according to the gradient weights in the different directions.
- In a direction where the amount of energy change is small, the reference proportion is large, and therefore the weight in the interpolation calculation is large; in a direction where the energy changes greatly, the weight is small.
- R5'5' is interpolated from R44, R74, R47 and R77, and there are no original pixels of the same color in the horizontal and vertical directions, so the components of the color in the horizontal and vertical directions are first calculated from the associated pixel unit.
- the components in the horizontal direction are R45 and R75
- the components in the vertical direction are R54 and R57 which can be calculated by R44, R74, R47 and R77, respectively.
- R45 = R44*2/3 + R47*1/3
- R75 = R74*2/3 + R77*1/3
- R54 = R44*2/3 + R74*1/3
- R57 = R47*2/3 + R77*1/3
- The gradient amount and weight in the horizontal and vertical directions are calculated separately; that is, the gradient amount of the color in each direction determines the reference weight of that direction in the interpolation: the smaller the gradient amount, the larger the weight, and the larger the gradient amount, the smaller the weight.
- the gradient amount in the horizontal direction is X1;
- the gradient amount in the vertical direction is X2;
- W1 = X1/(X1+X2)
- W2 = X2/(X1+X2)
- R5'5' = (2/3*R45 + 1/3*R75)*W2 + (2/3*R54 + 1/3*R57)*W1. It can be understood that if X1 is greater than X2, then W1 is greater than W2; so the weight applied in the horizontal direction during the calculation is W2, and the weight in the vertical direction is W1, and vice versa.
- the pixel value of the current pixel can be calculated according to the interpolation method.
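The calculation above can be collected into a short sketch. The component and weight formulas follow the text directly; the absolute-difference definitions of X1 and X2 are an assumption, since the text truncates those definitions, and the zero-gradient fallback is added so the sketch is well defined in flat regions.

```python
def first_interpolation(R44, R47, R74, R77):
    """Gradient-weighted interpolation of R5'5' from its associated pixel unit."""
    # Components in the horizontal direction (per the formulas in the text)
    R45 = R44 * 2/3 + R47 * 1/3
    R75 = R74 * 2/3 + R77 * 1/3
    # Components in the vertical direction
    R54 = R44 * 2/3 + R74 * 1/3
    R57 = R47 * 2/3 + R77 * 1/3

    # Gradient amounts: assumed to be absolute differences of the paired components
    X1 = abs(R45 - R75)   # horizontal-direction gradient
    X2 = abs(R54 - R57)   # vertical-direction gradient
    if X1 + X2 == 0:
        W1 = W2 = 0.5     # flat region: equal weights (added fallback)
    else:
        W1 = X1 / (X1 + X2)
        W2 = X2 / (X1 + X2)

    # Larger gradient -> smaller weight: the horizontal term is scaled by W2
    return (2/3 * R45 + 1/3 * R75) * W2 + (2/3 * R54 + 1/3 * R57) * W1
```

For a uniform patch the result reduces to the common value, and for a non-uniform patch the output stays within the range of the four inputs, as expected of a convex combination.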
- The original pixels can thus be converted into pseudo-original pixels arranged in a typical Bayer array; that is, each adjacent 2*2 array of pseudo-original pixels includes one red pseudo-original pixel, two green pseudo-original pixels, and one blue pseudo-original pixel.
- The manner of interpolation is not limited to considering only pixel values of the same color in the vertical and horizontal directions, as in the calculation disclosed in this embodiment; for example, pixel values of other colors may also be referenced.
- Referring to FIG. 13, the second interpolation algorithm is explained using FIG. 13 as an example.
- First, the average pixel value of each image pixel unit is calculated: Ravg = (R1+R2+R3+R4)/4, Gravg = (Gr1+Gr2+Gr3+Gr4)/4, Gbavg = (Gb1+Gb2+Gb3+Gb4)/4, and Bavg = (B1+B2+B3+B4)/4. The pixel values of R11, R12, R21, and R22 are then all Ravg; the pixel values of Gr31, Gr32, Gr41, and Gr42 are all Gravg; the pixel values of Gb13, Gb14, Gb23, and Gb24 are all Gbavg; and the pixel values of B33, B34, B43, and B44 are all Bavg. Taking the current pixel B22 as an example, the associated pixel corresponding to B22 is R22. Since the color of the current pixel B22 differs from that of the associated pixel R22, the pixel value of B22 takes the value of the nearest blue filter, i.e. the value Bavg of any of B33, B34, B43, and B44.
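The second interpolation reduces to averaging the four same-color original pixels of an image pixel unit and reusing that average. A minimal sketch (the function name and sample values are ours):

```python
def second_interpolation(image_pixel_unit):
    """Average the four same-color original pixels of one image pixel
    unit; every pseudo-original pixel derived from that unit takes this
    average (e.g. Ravg = (R1+R2+R3+R4)/4)."""
    return sum(image_pixel_unit) / len(image_pixel_unit)

# Current pixel B22: its associated pixel R22 has a different color,
# so B22 takes the average of the nearest blue unit (B33, B34, B43, B44).
b_avg = second_interpolation([100, 104, 96, 100])
```

Because each unit collapses to a single average, this path is far cheaper than the gradient-weighted first interpolation, which is why it is reserved for regions outside the depth of field.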
- In this way, for the different cases of the current pixel, the original pixels are converted into pseudo-original pixels in different manners, thereby converting the color block image into a pseudo-original image. Because the image sensor 200 adopts filters arranged in a special Bayer-array structure, the image signal-to-noise ratio is improved, and during image processing the color block image is interpolated, improving the resolution of the image.
- For the regions inside and outside the depth-of-field area, different interpolation calculations are used, reducing the memory resources and computation time consumed without affecting image quality.
- Referring to FIGS. 14 and 15, in some embodiments, before step S68 the method includes the step:
- S67a: performing white balance compensation on the color block image;
- and after step S46 the method includes the step:
- S69a: performing white balance compensation restoration on the pseudo-original image.
- In some embodiments, the first conversion module 160 includes a white balance compensation unit 167a and a white balance compensation restoration unit 169a.
- Step S67a may be implemented by the white balance compensation unit 167a,
- and step S69a may be implemented by the white balance compensation restoration unit 169a.
- In other words, the white balance compensation unit 167a is configured to perform white balance compensation on the color block image,
- and the white balance compensation restoration unit 169a is configured to perform white balance compensation restoration on the pseudo-original image.
- Specifically, in the process of converting the color block image into the pseudo-original image, the interpolation of the red and blue pseudo-original pixels often refers not only to the color of original pixels of the channel of the same color, but also to the color weights of original pixels of the green channel. Therefore, white balance compensation is required before interpolation to exclude the influence of white balance from the interpolation calculation. In order not to destroy the white balance of the color block image, white balance compensation restoration is performed after interpolation, restoring according to the red, green, and blue gain values used in the compensation.
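The compensate-then-restore round trip can be sketched as a pair of inverse per-channel scalings. The gain values and channel layout below are illustrative assumptions, not values from the patent:

```python
def wb_compensate(pixels, gains):
    """Remove white-balance gains per channel before interpolation, so
    interpolation sees gain-free values."""
    return {ch: v / gains[ch] for ch, v in pixels.items()}

def wb_restore(pixels, gains):
    """Re-apply the same gains after interpolation so the pseudo-original
    image keeps the white balance of the color block image."""
    return {ch: v * gains[ch] for ch, v in pixels.items()}

gains = {"R": 1.8, "G": 1.0, "B": 1.5}       # assumed AWB gains
raw = {"R": 90.0, "G": 120.0, "B": 60.0}     # assumed channel values
restored = wb_restore(wb_compensate(raw, gains), gains)
```

The key property is that restoration with the same gains recovers the original values, so the white balance of the color block image is preserved.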
- Referring again to FIGS. 14 and 15, in some embodiments, before step S68 the method includes the step: S67b: performing dead pixel compensation on the color block image.
- In some embodiments, the first conversion module 160 includes a dead pixel compensation module 167b.
- Step S67b may be implemented by the dead pixel compensation module 167b.
- In other words, the dead pixel compensation module 167b is configured to perform dead pixel compensation on the color block image.
- It can be understood that, limited by the manufacturing process, the image sensor 200 may have dead pixels.
- A dead pixel usually presents the same color at all times, regardless of changes in sensitivity, and its presence affects image quality. Therefore, to ensure that the interpolation is accurate and unaffected by dead pixels, dead pixel compensation is required before interpolation.
- Specifically, during dead pixel compensation, the original pixels may be detected, and
- when a certain original pixel is detected to be a dead pixel, compensation may be performed according to the pixel values of the other original pixels in the image pixel unit in which it is located.
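One simple compensation policy consistent with this description is to replace the dead pixel with the mean of the other original pixels in its 2*2 unit; the patent does not fix a specific formula, so this is an assumption:

```python
def compensate_dead_pixel(unit, dead_index):
    """Replace a dead original pixel with the mean of the other
    original pixels in the same image pixel unit (one simple policy;
    the exact formula is not specified in the text)."""
    others = [v for i, v in enumerate(unit) if i != dead_index]
    repaired = list(unit)
    repaired[dead_index] = sum(others) / len(others)
    return repaired

unit = [50, 52, 255, 54]              # third pixel stuck bright -> dead
repaired = compensate_dead_pixel(unit, 2)
```

Detection (deciding *which* index is dead) would run beforehand, e.g. by flagging pixels that stay constant across exposures.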
- Referring again to FIGS. 14 and 15, in some embodiments, before step S46 the method includes the step: S67c: performing crosstalk compensation on the color block image.
- In some embodiments, the first conversion module 160 includes a crosstalk compensation module 167c.
- Step S67c may be implemented by the crosstalk compensation module 167c.
- In other words, the crosstalk compensation module 167c is configured to perform crosstalk compensation on the color block image.
- Specifically, the four photosensitive pixels in one photosensitive pixel unit are covered by a filter of the same color, and there may be differences in sensitivity among the photosensitive pixels, so that fixed-pattern noise appears in the solid-color regions of the true-color image converted from the pseudo-original image, degrading image quality. Therefore, crosstalk compensation needs to be performed on the color block image.
- In some embodiments, setting the compensation parameters includes the following steps:
- S671: providing a predetermined light environment;
- S672: setting imaging parameters of the imaging device;
- S673: capturing a plurality of frames of images;
- S674: processing the plurality of frames of images to obtain crosstalk compensation parameters; and
- S675: saving the crosstalk compensation parameters in the image processing device.
- The predetermined light environment may include, for example, an LED homogenizing plate, a color temperature of about 5000 K, and a brightness of about 1000 lux;
- the imaging parameters may include a gain value, a shutter value, and a lens position. After the relevant parameters are set, the crosstalk compensation parameters are acquired.
- During processing, a plurality of color block images are first acquired with the set imaging parameters in the set light environment and merged into one color block image, thereby reducing the noise influence of using a single color block image as the calibration basis.
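The frame-merging step is a pixel-wise average across the captured frames; a minimal sketch (flattened pixel lists stand in for full images):

```python
def merge_frames(frames):
    """Pixel-wise average of several color block images captured under
    the same light environment and imaging parameters, reducing the
    noise of calibrating from a single frame."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

# Three noisy captures of a two-pixel region average out to a stable value.
merged = merge_frames([[10, 12], [14, 12], [12, 12]])
```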
- Referring to FIG. 17, taking the image pixel unit Gr in FIG. 17 as an example, which includes Gr1, Gr2, Gr3, and Gr4, crosstalk compensation aims to calibrate photosensitive pixels of possibly different sensitivities to substantially the same level through compensation.
- The average pixel value of the image pixel unit is Gr_avg = (Gr1+Gr2+Gr3+Gr4)/4, which basically characterizes the average sensitivity of the four photosensitive pixels. Taking this average as the base value, Gr1/Gr_avg, Gr2/Gr_avg, Gr3/Gr_avg, and Gr4/Gr_avg are calculated respectively. It can be understood that the ratio of the pixel value of each original pixel to the average pixel value of its image pixel unit basically reflects the deviation of that original pixel from the base value. The four ratios are recorded as compensation parameters in the memory of the relevant device and retrieved during imaging to compensate each original pixel, thereby reducing crosstalk and improving image quality.
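The calibration and its application can be sketched directly from those formulas (function names and sample values are ours):

```python
def crosstalk_params(unit):
    """Per-pixel compensation ratios for one image pixel unit,
    e.g. Gr1/Gr_avg ... Gr4/Gr_avg."""
    avg = sum(unit) / len(unit)
    return [v / avg for v in unit]

def crosstalk_compensate(unit, ratios):
    """Divide each original pixel by its recorded ratio, calibrating
    pixels of different sensitivity to roughly the same level."""
    return [v / r for v, r in zip(unit, ratios)]

gr_unit = [96.0, 100.0, 104.0, 100.0]   # Gr1..Gr4 under flat calibration light
ratios = crosstalk_params(gr_unit)
flat = crosstalk_compensate(gr_unit, ratios)
```

Under the flat calibration light, compensation maps all four pixels back to the unit average, which is exactly the "same level" the text describes.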
- Usually, after the crosstalk compensation parameters are set, the set parameters should be verified. During verification, a color block image is first acquired under the same light environment and imaging parameters, and crosstalk compensation is applied to it according to the calculated compensation parameters; the compensated Gr'_avg, Gr'1/Gr'_avg, Gr'2/Gr'_avg, Gr'3/Gr'_avg, and Gr'4/Gr'_avg are then calculated. Whether the compensation parameters are accurate is judged from the results, considering both macroscopic and microscopic perspectives.
- Microscopic means that a certain original pixel still has a large deviation after compensation and is easily perceived by the user after imaging; macroscopic takes the global view: when the total number of original pixels that still deviate after compensation is large, the image as a whole is still perceived by the user even though the deviation of each individual original pixel is small. Therefore, it suffices to set a ratio threshold for the microscopic case, and a ratio threshold plus a quantity threshold for the macroscopic case. In this way, the set crosstalk compensation parameters can be verified, ensuring correct compensation parameters and reducing the impact of crosstalk on image quality.
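The micro/macro check can be sketched as two threshold tests over the compensated ratios; all threshold values below are illustrative assumptions, not values from the patent:

```python
def verify_params(ratios, micro_thresh=0.02, macro_thresh=0.01, macro_count=2):
    """Check compensated ratios (Gr'i/Gr'_avg) against a microscopic
    threshold (no single pixel deviates too much) and a macroscopic
    threshold pair (not too many pixels keep a small residual
    deviation). Threshold values here are assumptions."""
    deviations = [abs(r - 1.0) for r in ratios]
    micro_ok = max(deviations) <= micro_thresh     # worst single pixel
    macro_ok = sum(d > macro_thresh for d in deviations) < macro_count
    return micro_ok and macro_ok

ok = verify_params([1.005, 0.996, 1.003, 0.999])
```

A well-compensated unit passes both tests; a unit with one badly deviating pixel fails the microscopic test even if the rest are clean.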
- Referring to FIGS. 14 and 15, in some embodiments, after step S68 the method further includes the step:
- S69b: performing lens shading correction, demosaicing, noise reduction, and edge sharpening on the pseudo-original image.
- In some embodiments, the first conversion module 160 includes a processing unit 169b.
- Step S69b may be implemented by the processing unit 169b; in other words, the processing unit 169b is configured to perform lens shading correction, demosaicing, noise reduction, and edge sharpening on the pseudo-original image.
- It can be understood that after the color block image is converted into the pseudo-original image, the pseudo-original pixels are arranged in a typical Bayer array and can be processed by the processing unit 169b, the processing including lens shading correction, demosaicing, noise reduction, and edge sharpening.
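Of the listed post-processing steps, lens shading correction is the most self-contained to illustrate. The radial gain model below is a textbook sketch offered only to show the idea; the patent does not specify any model, and the coefficient `k` is an assumption:

```python
def lens_shading_correct(value, x, y, cx, cy, k=0.2):
    """Compensate radial vignetting: apply a gain that grows with the
    squared distance from the optical center (cx, cy). A simple
    polynomial model, not the patent's method."""
    r2 = (x - cx) ** 2 + (y - cy) ** 2
    return value * (1.0 + k * r2)

center = lens_shading_correct(100.0, 2, 2, 2, 2)   # at center: unchanged
corner = lens_shading_correct(100.0, 0, 0, 2, 2)   # off-center: brightened
```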
- an electronic device 1000 includes a housing 300, a processor 400, a memory 500, a circuit board 600, a power supply circuit 700, and an imaging device.
- the imaging device includes an image sensor 200
- an image sensor 200 includes a photosensitive pixel unit array and a filter unit array disposed on the photosensitive pixel unit array, each filter unit array covering a corresponding one of the photosensitive pixel units, each photosensitive pixel unit including a plurality of photosensitive pixels
- the circuit board 600 is disposed inside the space enclosed by the housing 300, and the processor 400 and the memory 500 are disposed on the circuit board;
- the power supply circuit 700 is configured to supply power to the circuits and devices of the electronic device 1000;
- the memory 500 is configured to store executable program code; and
- the processor 400 runs a program corresponding to the executable program code by reading the executable program code stored in the memory 500, so as to implement the control method of any of the above embodiments of the present invention.
- the processor 400 is configured to perform the following steps:
- controlling the image sensor to output a merged image, the merged image including a merged pixel array, the plurality of photosensitive pixels of a same photosensitive pixel unit being merged and output as one merged pixel;
- dividing the merged image into analysis areas arranged in an array;
- calculating a phase difference for each analysis area;
- merging the analysis areas whose phase differences meet a predetermined condition into a depth-of-field area;
- controlling the image sensor to output a color block image, the color block image including image pixel units arranged in a predetermined array, each image pixel unit including a plurality of original pixels, each photosensitive pixel unit corresponding to one image pixel unit, and each photosensitive pixel corresponding to one image pixel;
- converting the color block image into a pseudo-original image using a first interpolation algorithm, the pseudo-original image including pseudo-original pixels arranged in an array, each photosensitive pixel corresponding to one pseudo-original pixel, the pseudo-original pixels including a current pixel, and the original pixels including an associated pixel corresponding to the current pixel;
- determining whether the associated pixel is located within the depth-of-field area;
- when the associated pixel is located within the depth-of-field area, determining whether the color of the current pixel is the same as the color of the associated pixel;
- when the color of the current pixel is the same as the color of the associated pixel, taking the pixel value of the associated pixel as the pixel value of the current pixel;
- when the color of the current pixel differs from the color of the associated pixel, calculating the pixel value of the current pixel from the pixel values of an associated pixel unit using the first interpolation algorithm, the image pixel units including the associated pixel unit, and the associated pixel unit being the same color as the current pixel and adjacent to the current pixel; and
- when the associated pixel is located outside the depth-of-field area, calculating the pixel value of the current pixel using a second interpolation algorithm, the complexity of the second interpolation algorithm being lower than that of the first interpolation algorithm.
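The per-pixel dispatch in the steps above can be sketched as follows. The `Pixel` type and the interpolation callables are our stand-ins, not the patent's modules:

```python
from collections import namedtuple

Pixel = namedtuple("Pixel", "color value")

def convert_pixel(current, associated, in_depth_of_field,
                  first_interp, second_interp):
    """Dispatch of the control method for one pseudo-original pixel:
    outside the depth-of-field area use the cheaper second
    interpolation; inside it, copy the value when the colors match,
    otherwise run the accurate first interpolation."""
    if not in_depth_of_field:
        return second_interp(current)
    if current.color == associated.color:
        return associated.value
    return first_interp(current)
```

A usage sketch: inside the depth of field with matching colors, the associated pixel's value is copied through; otherwise the chosen interpolation callable runs.

```python
val = convert_pixel(Pixel("R", 10), Pixel("R", 42), True,
                    lambda c: -1, lambda c: -2)   # copies 42
```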
- In some embodiments, the processor 400 is configured to perform the following steps: calculating gradients of the associated pixels in each direction; calculating weights in each direction; and calculating the pixel value of the current pixel according to the gradients and the weights.
- In some embodiments, the processor 400 is configured to perform the following steps: performing white balance compensation on the color block image; and performing white balance compensation restoration on the pseudo-original image.
- In some embodiments, the processor 400 is configured to perform the following step: performing dead pixel compensation on the color block image.
- In some embodiments, the processor 400 is configured to perform the following step:
- performing crosstalk compensation on the color block image.
- In some embodiments, the processor 400 is configured to perform the following step: performing lens shading correction, demosaicing, noise reduction, and edge sharpening on the pseudo-original image.
- It should be noted that the foregoing explanations of the control method and the control device 100 are also applicable to the electronic device 1000 of the embodiments of the present invention, and details are not repeated here.
- The computer-readable storage medium of the embodiments of the present invention has instructions stored therein.
- When the processor 400 of the electronic device 1000 executes the instructions,
- the electronic device 1000 performs the control method of the embodiments of the present invention, and the foregoing explanations of the control method and the control device 100
- are also applicable to the computer-readable storage medium of the embodiments of the present invention and are not repeated here.
- In summary, by recognizing and judging the depth-of-field range, the electronic device 1000 and the computer-readable storage medium of the embodiments of the present invention apply different conversion modes to the regions inside and outside the depth of field and output an appropriate image, avoiding
- the resource and time cost of a fixed conversion and output mode of the image sensor, improving work efficiency while satisfying the user's requirements for image clarity.
- The terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
- Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature.
- In the description of the present invention, "a plurality" means at least two, such as two, three, and so on, unless specifically defined otherwise.
- a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
- computer readable media include the following: electrical connections (electronic devices) having one or more wires, portable computer disk cartridges (magnetic devices), random access memory (RAM), Read only memory (ROM), erasable editable read only memory (EPROM or flash memory), fiber optic devices, and portable compact disk read only memory (CDROM).
- the computer readable medium may even be a paper or other suitable medium on which the program can be printed, as it may be optically scanned, for example by paper or other medium, followed by editing, interpretation or, if appropriate, other suitable The method is processed to obtain the program electronically and then stored in computer memory.
- portions of the invention may be implemented in hardware, software, firmware or a combination thereof.
- In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system.
- For example, if implemented in hardware, as in another embodiment, they may be implemented by any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and so on.
- each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
- the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
- the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
- The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Color Television Image Signal Generators (AREA)
Abstract
The present invention discloses a control method for controlling an electronic device (1000). The control method includes the following steps: (S10) controlling an image sensor to output a merged image; (S20) dividing the merged image into analysis areas arranged in an array; (S30) calculating a phase difference for each analysis area; (S40) merging the analysis areas whose phase differences meet a predetermined condition into a depth-of-field area; (S50) controlling the image sensor to output a color block image; (S60) converting the color block image into a pseudo-original image using a first interpolation algorithm; and (S70) calculating the pixel value of the current pixel using a second interpolation algorithm when the associated pixel is located outside the depth-of-field area.
Description
相关申请的交叉引用
本申请基于申请号为201611079637.X,申请日为2016年11月29日的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本申请作为参考。
本发明涉及图像处理技术,特别涉及一种控制方法和电子装置。
现有的一种图像传感器包括感光像素单元阵列和设置在感光像素单元阵列上的滤光片单元阵列,每个滤光片单元阵列覆盖对应一个感光像素单元,每个感光像素单元包括多个感光像素。工作时,可以控制图像传感器曝光输出合并图像,合并图像可以通过图像处理方法转化成合并真彩图像并保存下来。合并图像包括合并像素阵列,同一感光像素单元的多个感光像素合并输出作为对应的合并像素。如此,可以提高合并图像的信噪比,然而,合并图像的解析度降低。当然,也可以控制图像传感器曝光输出高像素的色块图像,色块图像包括原始像素阵列,每个感光像素对应一个原始像素。然而,由于同一滤光片单元对应的多个原始像素颜色相同,同样无法提高色块图像的解析度。因此,需要通过插值计算的方式将高像素色块图像转化成高像素的仿原图像,仿原图像可以包括呈贝尔阵列排布的仿原像素。仿原图像可以通过图像处理方法转化成仿原真彩图像并保存下来。插值计算可以提高真彩图像的清晰度,然而耗费资源且耗时,导致拍摄时间加长,用户体验差。另一方面,具体应用时,用户往往只关注真彩图像中的景深范围内的清晰度。
发明内容
本发明的实施例提供一种控制方法和电子装置。
一种控制方法,用于控制电子装置,所述电子装置包括成像装置和显示器,所述成像装置包括图像传感器,所述图像传感器包括感光像素单元阵列和设置在所述感光像素单元阵列上的滤光片单元阵列,每个所述滤光片单元阵列覆盖对应一个所述感光像素单元,每个所述感光像素单元包括多个感光像素,所述控制方法包括以下步骤:
控制所述图像传感器输出合并图像,所述合并图像包括合并像素阵列,同一所述感光像素单元的多个感光像素合并输出作为一个所述合并像素;
将所述合并图像划分成阵列排布的分析区域;
计算每个所述分析区域的相位差;
将所述相位差符合预定条件对应的所述分析区域归并为景深区域;
控制所述图像传感器输出色块图像,所述色块图像包括预定阵列排布的图像像素单元,所述图像像素单元包括多个原始像素,每个所述感光像素单元对应一个所述图像像素单元,每个所述感光像素对应一个所述图像像素;和
利用第一插值算法将所述色块图像转化成仿原图像,所述仿原图像包括阵列排布的仿原像素,每个所述感光像素对应一个所述仿原像素,所述仿原像素包括当前像素,所述原始像素包括与所述当前像素对应的关联像素,所述利用第一插值算法将所述色块图像转化成仿原图像包括以下步骤:
判断所述关联像素是否位于所述景深区域内;
在所述关联像素位于所述景深区域内时判断所述当前像素的颜色与所述关联像素的颜色是否相同;
在所述当前像素的颜色与所述关联像素的颜色相同时,将所述关联像素的像素值作为所述当前像素的像素值;
在所述当前像素的颜色与所述关联像素的颜色不同时,根据关联像素单元的像素值通过所述第一插值算法计算所述当前像素的像素值,所述图像像素单元包括所述关联像素单元,所述关联像素单元的颜色与所述当前像素相同且与所述当前像素相邻;和
在所述关联像素位于所述景深区域外时,通过第二插值算法计算所述当前像素的像素值,所述第二插值算法的复杂度小于所述第一插值算法。
一种电子装置,包括成像装置,所述成像装置包括图像传感器,所述图像传感器包括感光像素单元阵列和设置在所述感光像素单元阵列上的滤光片单元阵列,每个所述滤光片单元覆盖对应一个所述感光像素单元,每个所述感光像素单元包括多个感光像素;显示器和处理器,所述处理器用于:
控制所述图像传感器输出合并图像,所述合并图像包括合并像素阵列,同一所述感光像素单元的多个感光像素合并输出作为一个所述合并像素;
将所述合并图像划分成阵列排布的分析区域;
计算每个所述分析区域的相位差;
将所述相位差符合预定条件对应的所述分析区域归并为景深区域;
控制所述图像传感器输出色块图像,所述色块图像包括预定阵列排布的图像像素单元,所述图像像素单元包括多个原始像素,每个所述感光像素单元对应一个所述图像像素单元,每个所述感光像素对应一个所述图像像素;
利用第一插值算法将所述色块图像转化成仿原图像,所述仿原图像包括阵列排布的仿原像素,每个所述感光像素对应一个所述仿原像素,所述仿原像素包括当前像素,所述原始像素包括与所述当前像素对应的关联像素,所述利用第一插值算法将所述色块图像转化成仿原图像包括以下步骤:
判断所述关联像素是否位于所述景深区域内;
在所述关联像素位于所述景深区域内时判断所述当前像素的颜色与所述关联像素的颜色是否相同;
在所述当前像素的颜色与所述关联像素的颜色相同时,将所述关联像素的像素值作为所述当前像素的像素值;
在所述当前像素的颜色与所述关联像素的颜色不同时,根据关联像素单元的像素值通过所述第一插值算法计算所述当前像素的像素值,所述图像像素单元包括所述关联像素单元,所述关联像素单元的颜色与所述当前像素相同且与所述当前像素相邻;和
在所述关联像素位于所述景深区域外时,通过第二插值算法计算所述当前像素的像素值,所述第二插值算法的复杂度小于所述第一插值算法。
一种电子装置,包括壳体、处理器、存储器、电路板、电源电路和成像装置,所述电路板安置在所述壳体围成的空间内部,所述处理器和所述存储器设置在所述电路板上;所述电源电路,用于为所述电子装置的各个电路或器件供电;所述存储器用于存储可执行程序代码;所述处理器通过读取所述存储器中存储的可执行程序代码来运行与所述可执行程序代码对应的程序,以用于执行所述的控制方法。
一种计算机可读存储介质,具有存储于其中的指令,当电子装置的处理器执行所述指令时,所述电子装置执行所述的控制方法。
本发明实施方式的控制方法、控制装置和电子装置,通过对景深范围的识别和判断,对景深范围内外不同区域分别采用不同转化方式,并输出合适的图像,避免图像传感器固定的转化及输出模式耗费资源且耗时,提高工作效率,同时又能满足用户对图像清晰度。
本发明的附加方面和优点将在下面的描述中部分给出,部分将从下面的描述中变得明显,或通过本发明的实践了解到。
本发明的上述和/或附加的方面和优点从结合下面附图对实施方式的描述中将变得明显和容易理解,其中:
图1是本发明实施方式的控制方法的流程示意图。
图2是本发明实施方式的电子装置的功能模块示意图。
图3是本发明实施方式的图像传感器的模块示意图。
图4是本发明实施方式的图像传感器的电路示意图。
图5是本发明实施方式的滤光片单元的示意图
图6是本发明实施方式的图像传感器的结构示意图。
图7是本发明实施方式的合并图像的状态示意图。
图8是本发明实施方式的色块图像的状态示意图。
图9是本发明实施方式的控制方法的状态示意图。
图10是本发明实施方式的控制方法的状态示意图。
图11是本发明实施方式的控制方法的流程示意图。
图12是本发明实施方式的第二计算单元的功能模块示意图。
图13是本发明实施方式的控制方法的状态示意图。
图14是本发明某些实施方式的控制方法的流程示意图。
图15是本发明某些实施方式的第一转化模块的功能模块示意图。
图16是本发明某些实施方式的控制方法的流程示意图。
图17是本发明某些实施方式的控制方法的状态示意图。
图18是本发明某些实施方式的电子装置的模块示意图。
下面详细描述本发明的实施例,所述实施例的示例在附图中示出,其中自始至终相同或类似的标号表示相同或类似的元件或具有相同或类似功能的元件。下面通过参考附图描述的实施例是示例性的,旨在用于解释本发明,而不能理解为对本发明的限制。
请参阅图1,本发明实施方式的控制方法,用于控制电子装置。电子装置包括成像装置和显示器。成像装置包括图像传感器,图像传感器包括感光像素单元阵列和设置在感光像素单元阵列上的滤光片单元阵列,每个滤光片单元覆盖对应一个感光像素单元,每个感光像素单元包括多个感光像素。控制方法包括步骤:
S10:控制所述图像传感器输出合并图像;
S20:将所述合并图像划分成阵列排布的分析区域;
S30:计算每个所述分析区域的相位差;
S40:将相位差符合预定条件对应的所述分析区域归并为景深区域;
S50:控制所述图像传感器输出色块图像;
S60:利用第一插值算法将色块图像转化成仿原图像;和
S70:在关联像素位于景深区域外时,通过第二插值算法计算当前像素的像素值。其中第二插值算法的复杂度小于第一插值算法。
其中,步骤S60包括步骤:
S62:判断关联像素是否位于景深区域内;
S64:在关联像素位于景深区域内时判断当前像素的颜色与关联像素的颜色是否相同;
S66:在当前像素的颜色与关联像素的颜色相同时,将关联像素的像素值作为当前像素的像素值;
S68:在当前像素的颜色与关联像素的颜色不同时,根据关联像素单元的像素值通过第一插值算法计算当前像素的像素值。
请参阅图2,本发明实施方式的控制装置100,用于控制电子装置1000,电子装置1000还包括成像装置和显示器。成像装置包括图像传感器200。控制装置100包括第一控制模块110、划分模块120和计算模块130、归并模块140、第二控制模块150、第一转化模块160和第二转化模块170。其中,第一转化模块160包括第一判断单元162、第二判断单元164、第一计算单元166和第二计算单元168。作为例子,本发明实施方式的控制在方法可以由本发明实施方式的控制装置100实现,可应用于电子装置1000并用于控制电子装置1000的成像装置的图像传感器200输出图像。
在一些示例中,电子装置1000包括手机或平板电脑。成像装置包括前置相机或后置相机。
请一并参阅图3至图6,本发明实施方式的图像传感器200包括感光像素单元阵列210和设置在感光像素单元阵列210上的滤光片单元阵列220。
进一步地,感光像素单元阵列210包括多个感光像素单元210a,每个感光像素单元210a包括多个相邻的感光像素212。每个感光像素212包括一个感光器件2121和一个传输管2122,其中,感光器件2121可以是光电二极管,传输管2122可以是MOS晶体管。
滤光片单元阵列220包括多个滤光片单元220a,每个滤光片单元220a覆盖对应一个感光像素单元210a。
具体地,在某些示例中,滤光片单元阵列220包括拜耳阵列,也即是说,相邻的四个滤光片单元220a分别为一个红色滤光片单元、一个蓝色滤光片单元和两个绿色滤光片单元。
每一个感光像素单元210a对应同一颜色的滤光片单元220a,若一个感光像素单元210a中一共包括n个相邻的感光器件2121,那么一个滤光片单元220a覆盖一个感光像素单元210a中的n个感光器件2121,该滤光片单元220a可以是一体构造,也可以由n个独立的子滤光片组装连接在一起。
在某些实施方式中,每个感光像素单元210a包括四个相邻的感光像素212,相邻两个感光像素212共同构成一个感光像素子单元2120,感光像素子单元2120还包括一个源极跟随器2123及一个模数转换器2124。感光像素单元210a还包括一个加法器213。其中,
一个感光像素子单元2120中的每个传输管2122的一端电极被连接到对应感光器件2121的阴极电极，每个传输管2122的另一端被共同连接至源极跟随器2123的闸极电极，并通过源极跟随器2123的源极电极连接至一个模数转换器2124。其中，源极跟随器2123可以是MOS晶体管。两个感光像素子单元2120通过各自的源极跟随器2123及模数转换器2124连接至加法器213。
也即是说,本发明实施方式的图像传感器200的一个感光像素单元210a中相邻的四个感光器件2121共用一个同颜色的滤光片单元220a,每个感光器件2121对应连接一个传输管2122,相邻两个感光器件2121共用一个源极跟随器2123和一个模数转换器2124,相邻的四个感光器件2121共用一个加法器213。
进一步地,相邻的四个感光器件2121呈2*2阵列排布。其中,一个感光像素子单元2120中的两个感光器件2121可以处于同一列。
在成像时,当同一滤光片单元220a下覆盖的两个感光像素子单元2120或者说四个感光器件2121同时曝光时,可以对像素进行合并进而可输出合并图像。
具体地，感光器件2121用于将光照转化为电荷，且产生的电荷与光照强度成比例关系，传输管2122用于根据控制信号来控制电路的导通或断开。当电路导通时，源极跟随器2123用于将感光器件2121经光照产生的电荷信号转化为电压信号。模数转换器2124用于将电压信号转换为数字信号。加法器213用于将两路数字信号相加共同输出，以供与图像传感器200相连的图像处理模块处理。
请参阅图7,以16M的图像传感器200举例来说,本发明实施方式的图像传感器200可以将16M的感光像素合并成4M,或者说,输出合并图像,合并图像包括预定阵列排布的合并像素,同一感光像素单元210a的多个感光像素212合并输出作为一个合并像素,在一些示例中,每个感光像素单元210a包括四个感光像素212,也即是说,合并后,感光像素的大小相当于变成了原来大小的4倍,从而提升了感光像素的感光度。此外,由于图像传感器200中的噪声大部分都是随机噪声,对于合并之前的感光像素的来说,有可能其中一个或两个像素中存在噪点,而在将四个感光像素合并成一个大的感光像素后,减小了噪点对该大像素的影响,也即是减弱了噪声,提高了信噪比。
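The 2*2 pixel merging (binning) described above, which turns a 16M readout into a 4M merged image, can be sketched as follows. This is an illustrative sketch only, not the sensor's actual adder circuit, and the function name is ours:

```python
def bin_2x2(raw, width, height):
    """Combine each 2x2 block of same-color photosensitive pixels into
    one merged pixel by summing the four values; summing raises the
    effective sensitivity and averages out random noise across the
    four pixels, improving the signal-to-noise ratio."""
    out = []
    for y in range(0, height, 2):
        row = []
        for x in range(0, width, 2):
            row.append(raw[y][x] + raw[y][x + 1]
                       + raw[y + 1][x] + raw[y + 1][x + 1])
        out.append(row)
    return out

# A 4x2 raw region collapses into a 2x1 merged row.
merged = bin_2x2([[1, 2, 3, 4],
                  [5, 6, 7, 8]], 4, 2)
```

The trade-off noted in the following paragraph applies: the merged image has one quarter of the pixel count, so its resolution drops even as its SNR improves.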
但在感光像素大小变大的同时,由于像素值降低,合并图像的解析度也将降低。
在成像时,当同一滤光片单元220a下覆盖的四个感光器件2121依次曝光时,经过图像处理可以输出色块图像。
具体地,感光器件2121用于将光照转化为电荷,且产生的电荷与光照强度成比例关系,传输管2122用于根据控制信号来控制电路的导通或断开。当电路导通时,源极跟随器2123用于将感光器件2121经光照产生的电荷信号转化为电压信号。模数转换器2124用于电压
信号转换为数字信号,以供与图像传感器200相连的图像处理模块处理。
请参阅图8,以16M的图像传感器200举例来说,本发明实施方式的图像传感器200还可以保持16M的感光像素输出,或者说输出色块图像,色块图像包括图像像素单元,图像像素单元包括2*2阵列排布的原始像素,该原始像素的大小与感光像素大小相同,然而由于覆盖相邻四个感光器件2121的滤光片单元220a为同一颜色,也即是说,虽然四个感光器件2121分别曝光,但覆盖其的滤光片单元220a颜色相同,因此,输出的每个图像像素单元的相邻四个原始像素颜色相同,仍然无法提高图像的解析度,需要进行进一步的处理。
可以理解,合并图像在输出时,四个相邻的同色感光像素以合并像素输出,如此,合并图像中的四个相邻的合并像素仍可看作是典型的拜耳阵列,可以直接被图像处理模块接收进行处理以输出真彩图像。而色块图像在输出时每个感光像素分别输出,由于相邻四个感光像素颜色相同,因此,一个图像像素单元的四个相邻原始像素的颜色相同,是非典型的拜耳阵列。而图像处理模块无法对非典型拜耳阵列直接进行处理,也即是说,在图像传感器200采用同一图像处理模块时,为兼容两种模式的真彩图像输出即合并模式下的真彩图像输出及色块模式下的真彩图像输出,需对色块图像进行转化处理,或者说将非典型拜耳阵列的图像像素单元转化为典型拜耳阵列的像素排布。
请参阅图9,本发明实施方式,将合并图像划分为M*N个分析区域,并分析每个分析区域的相位差,若相位差接近0,说明该分析区域处于景深范围内。由于景深通常是一个范围,也即是说,在一定相位差值范围内的分析区域均处于景深范围内。在图像拍摄过程中,位于景深范围内的部分能够清晰成像,用户较为敏感,而位于景深范围外的部分则无法清晰成像,而利用第一插值算法将色块图像转化为仿原图像复杂度较高,一般地,算法的复杂度包括时间复杂度和空间复杂度,也即是说占用的内存资源较大,并且计算耗费时间较长,而只对景深区域内的部分进行高解析度转化处理,而景深区域外则进行简单处理或者说利用复杂度较低的第二插值算法,使其转化为典型拜耳阵列可以输出图像即可。
仿原图像包括呈拜耳阵列排布的仿原像素。仿原像素包括当前像素,原始像素包括与当前像素对应的关联像素。
请参阅图10,以图10为例对第一插值算法进行说明,当前像素为R3’3’和R5’5’,对应的关联像素分别为R33和B55。
在获取当前像素R3’3’时，由于R3’3’与对应的关联像素R33的颜色相同，因此在转化时直接将R33的像素值作为R3’3’的像素值。
在获取当前像素R5’5’时,由于R5’5’与对应的关联像素B55的颜色不相同,显然不能直接将B55的像素值作为R5’5’的像素值,需要根据R5’5’的关联像素单元通过插值的方式
计算得到。
需要说明的是,以上及下文中的像素值应当广义理解为该像素的颜色属性数值,例如色彩值。
关联像素单元包括多个,例如4个,颜色与当前像素相同且与当前像素相邻的图像像素单元中的原始像素。
需要说明的是，此处相邻应做广义理解，以图10为例，R5’5’对应的关联像素为B55，与B55所在的图像像素单元相邻的且与R5’5’颜色相同的关联像素单元所在的图像像素单元分别为R44、R74、R47、R77所在的图像像素单元，而并非在空间上距离B55所在的图像像素单元更远的其他的红色图像像素单元。其中，与B55在空间上距离最近的红色原始像素分别为R44、R74、R47和R77，也即是说，R5’5’的关联像素单元由R44、R74、R47和R77组成，R5’5’与R44、R74、R47和R77的颜色相同且相邻。
请参阅图11,在某些实施方式中,步骤S68包括以下步骤:
S681:计算关联像素单元各个方向上的渐变量;
S682:计算关联像素单元各个方向上的权重;和
S683:根据渐变量及权重计算当前像素的像素值。
请参阅图12,在某些实施方式中,第二计算单元168包括第一计算子单元1681、第二计算子单元1682和第三计算单元1683。步骤S681可以由第一计算子单元1681实现,步骤S682可以由第二计算子单元1682实现,步骤S683可以由第三计算子单元1683实现。或者说,第一计算子单元1681用于计算关联像素单元各个方向上的渐变量,第二计算子单元1682用于计算关联像素单元各个方向上的权重,第三计算子单元1683用于根据渐变量及权重计算当前像素的像素值。
具体地,插值处理方式是参考图像在不同方向上的能量渐变,将与当前像素对应的颜色相同且相邻的关联像素单元依据在不同方向上的渐变权重大小,通过线性插值的方式计算得到当前像素的像素值。其中,在能量变化量较小的方向上,参考比重较大,因此,在插值计算时的权重较大。
在某些示例中,为方便计算,仅考虑水平和垂直方向。
R5’5’由R44、R74、R47和R77插值得到，而在水平和垂直方向上并不存在颜色相同的原始像素，因此需首先根据关联像素单元计算在水平和垂直方向上该颜色的分量。其中，水平方向上的分量为R45和R75，垂直方向上的分量为R54和R57，可以分别通过R44、R74、R47和R77计算得到。
具体地,R45=R44*2/3+R47*1/3,R75=2/3*R74+1/3*R77,R54=2/3*R44+1/3*R74,R57=2/3*R47+1/3*R77。
然后,分别计算在水平和垂直方向的渐变量及权重,也即是说,根据该颜色在不同方向的渐变量,以确定在插值时不同方向的参考权重,在渐变量小的方向,权重较大,而在渐变量较大的方向,权重较小。其中,在水平方向的渐变量X1=|R45-R75|,在垂直方向上的渐变量X2=|R54-R57|,W1=X1/(X1+X2),W2=X2/(X1+X2)。
如此,根据上述可计算得到,R5’5’=(2/3*R45+1/3*R75)*W2+(2/3*R54+1/3*R57)*W1。可以理解,若X1大于X2,则W1大于W2,因此计算时水平方向的权重为W2,而垂直方向的权重为W1,反之亦反。
如此,可根据插值方式计算得到当前像素的像素值。依据上述对关联像素的处理方式,可将原始像素转化为呈典型拜耳阵列排布的仿原像素,也即是说,相邻的四个2*2阵列的低亮像素包括一个红色低亮像素,两个绿色低亮像素和一个蓝色低亮像素。
需要说明的是,插值的方式包括但不限于本实施例中公开的在计算时仅考虑垂直和水平两个方向相同颜色的像素值的方式,例如还可以参考其他颜色的像素值。
请参阅图13,以图13为例对第二插值算法进行解释说明,先计算图像像素单元中各个原始像素的像素值:Ravg=(R1+R2+R3+R4)/4、Gravg=(Gr1+Gr2+Gr3+Gr4)/4、Gbavg=(Gb1+Gb2+Gb3+Gb4)/4、Bavg=(B1+B2+B3+B4)/4。此时,R11、R12、R21、R22的像素值均为Ravg,Gr31、Gr32、Gr41、Gr42的像素值均为Gravg,Gb13、Gb14、Gb23、Gb24的像素值均为Gbavg,B33、B34、B43、B44的像素值均为Bavg。以当前像素B22为例,当前像素B22对应的关联像素为R22,由于当前像素B22的颜色与关联像素R22的颜色不同,因此当前像素B22的像素值应取最邻近的蓝色滤光片对应的像素值即取B33、B34、B43、B44中任一Bavg的值。
如此,针对不同情况的当前像素,采用不同方式的将原始像素转化为仿原像素,从而将色块图像转化为仿原图像,由于图像传感器200采用了特殊的拜耳阵列结构的滤光片,提高了图像信噪比,并且在图像处理过程中,通过插值方式对色块图像进行插值处理,提高了图像的分辨率及解析度。而针对景深区域内外不同区域,采用不同插值计算方式,在不影响图像质量的情况下,减少了消耗的内存资源及运算时间。
请参阅图14和图15,在某些实施方式中,步骤S68前包括步骤:
S67a:对色块图像做白平衡补偿;
步骤S46后包括步骤:
S69a:对仿原图像做白平衡补偿还原。
在某些实施方式中,第一转化模块160包括白平衡补偿单元167a和白平衡补偿还原单元169a。步骤S67a可以由白平衡补偿单元167a实现,步骤S69a可以由白平衡补偿还原单元169a实现。或者说,白平衡补偿单元167a用于对色块图像做白平衡补偿,白平衡补偿
还原单元169a用于对仿原图像做白平衡补偿还原。
具体地,在一些示例中,在将色块图像转化为低亮图像的过程中,在插值过程中,红色和蓝色仿原像素往往不仅参考与其颜色相同的通道的原始像素的颜色,还会参考绿色通道的原始像素的颜色权重,因此,在插值前需要进行白平衡补偿,以在插值计算中排除白平衡的影响。为了不破坏色块图像的白平衡,因此,在插值之后需要将仿原图像进行白平衡补偿还原,还原时根据在补偿中红色、绿色及蓝色的增益值进行还原。
如此,可排除在插值过程中白平衡的影响,并且能够使得插值后得到的仿原图像保持色块图像的白平衡。
请再次参阅图14和图15,在某些实施方式中,步骤S68前包括步骤:
S67b:对色块图像做坏点补偿。
在某些实施方式中,第一转化模块160包括坏点补偿模块167b。步骤S67b可以由坏点补偿模块167b实现。或者说,坏点补偿模块167b用于对色块图像做坏点补偿。
可以理解,受限于制造工艺,图像传感器200可能会存在坏点,坏点通常不随感光度变化而始终呈现同一颜色,坏点的存在将影响图像质量,因此,为保证插值的准确,不受坏点的影响,需要在插值前进行坏点补偿。
具体地,坏点补偿过程中,可以对原始像素进行检测,当检测到某一原始像素为坏点时,可根据其所在的图像像素单元的其他原始像的像素值进行坏点补偿。
如此,可排除坏点对插值处理的影响,提高图像质量。
请再次参阅图14和图15,在某些实施方式中,步骤S46前包括步骤:
S67c:对色块图像做串扰补偿。
在某些实施方式中,第一转化模块160包括串扰补偿模块167c。步骤S67c可以由串扰补偿模块167c实现。或者说,串扰补偿模块167c用于对色块图像做串扰补偿。
具体的,一个感光像素单元中的四个感光像素覆盖同一颜色的滤光片,而感光像素之间可能存在感光度的差异,以至于以低亮图像转化输出的真彩图像中的纯色区域会出现固定型谱噪声,影响图像的质量。因此,需要对色块图像进行串扰补偿。
请参阅图16,如上述解释说明,进行串扰补偿,需要在图像传感器200制造过程中设定补偿参数,并将串扰补偿的相关参数预置于成像装置的存储器中或装设成像装置的电子装置1000例如手机或平板电脑中。
在某些实施方式中,设定补偿参数包括以下步骤:
S671:提供预定光环境;
S672:设置成像装置的成像参数;
S673:拍摄多帧图像;
S674:处理多帧图像以获得串扰补偿参数;和
S675:将串扰补偿参数保存在所述图像处理装置内。
预定光环境例如可包括LED匀光板,5000K左右的色温,亮度1000勒克斯左右,成像参数可包括增益值,快门值及镜头位置。设定好相关参数后,进行串扰补偿参数的获取。
处理过程中,首先在设定的光环境中以设置好的成像参数,获取多张色块图像,并合并成一张色块图像,如此可减少以单张色块图像作为校准基础的噪声影响。
请参阅图17,以图17中的图像像素单元Gr为例,其包括Gr1、Gr2、Gr3和Gr4,串扰补偿目的在于将感光度可能存在差异的感光像素通过补偿基本校准至同一水平。该图像像素单元的平均像素值为Gr_avg=(Gr1+Gr2+Gr3+Gr4)/4,可基本表征这四个感光像素的感光度的平均水平,以此平均值作为基础值,分别计算Gr1/Gr_avg,Gr2/Gr_avg,Gr3/Gr_avg和Gr4/Gr_avg,可以理解,通过计算每一个原始像素的像素值与该图像像素单元的平均像素值的比值,可以基本反映每个原始像素与基础值的偏差,记录四个比值并作为补偿参数记录到相关装置的存储器中,以在成像时进行调取对每个原始像素进行补偿,从而减少串扰,提高图像质量。
通常,在设定串扰补偿参数后还应当验证所设定的参数是否准确。
验证过程中,首先以相同的光环境和成像参数获取一张色块图像,依据计算得到的补偿参数对该色块图像进行串扰补偿,计算补偿后的Gr’_avg、Gr’1/Gr’_avg、Gr’2/Gr’_avg、Gr’3/Gr’_avg和Gr’4/Gr’_avg。根据计算结果判断补偿参数是否准确,判断可根据宏观与微观两个角度考虑。微观是指某一个原始像素在补偿后仍然偏差较大,成像后易被使用者感知,而宏观则从全局角度,也即是在补偿后仍存在偏差的原始像素的总数目较多时,此时即便单独的每一个原始像素的偏差不大,但作为整体仍然会被使用者感知。因此,针对微观设置一个比例阈值即可,针对宏观需设置一个比例阈值和一个数量阈值。如此,可对设置的串扰补偿参数进行验证,确保补偿参数的正确,以减少串扰对图像质量的影响。
请参阅图14和图15,在某些实施方式中,步骤S68后还包括步骤:
S69b:对仿原图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
在某些实施方式中,第一转化模块160包括处理单元169b。步骤S69b可以由处理单元169b实现,或者说,处理单元169b用于对仿原图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
可以理解,将色块图像转化为仿原图像后,仿原像素排布为典型的拜耳阵列,可采用处理单元169b进行处理,处理过程中包括镜片阴影校正、去马赛克、降噪和边缘锐化处理。
请参阅图18,本发明实施方式的电子装置1000包括壳体300、处理器400、存储器500、电路板600、电源电路700和成像装置。其中,成像装置包括图像传感器200,图像传感器
200包括感光像素单元阵列和设置在感光像素单元阵列上的滤光片单元阵列,每个滤光片单元阵列覆盖对应一个感光像素单元,每个感光像素单元包括多个感光像素,电路板600安置在壳体300围成的空间内部,处理器400和存储器500设置在电路板上;电源电路700用于为电子装置1000的各个电路或器件供电;存储器500用于存储可执行程序代码;处理器400通过读取存储器500中存储的可执行程序代码来运行与可执行程序代码对应的程序以实现上述的本发明任一实施方式的控制方法。在此过程中,处理器400用于执行以下步骤:
控制图像传感器输出合并图像,合并图像包括合并像素阵列,同一感光像素单元的多个感光像素合并输出作为一个合并像素;
将合并图像划分成阵列排布的分析区域;
计算每个分析区域的相位差;
将相位差符合预定条件对应的分析区域归并为景深区域;
控制图像传感器输出色块图像,色块图像包括预定阵列排布的图像像素单元,图像像素单元包括多个原始像素,每个感光像素单元对应一个图像像素单元,每个感光像素对应一个图像像素;和
利用第一插值算法将色块图像转化成仿原图像,仿原图像包括阵列排布的仿原像素,每个感光像素对应一个仿原像素,仿原像素包括当前像素,原始像素包括与当前像素对应的关联像素;
判断关联像素是否位于景深区域内;
在关联像素位于景深区域内时判断当前像素的颜色与关联像素的颜色是否相同;
在当前像素的颜色与关联像素的颜色相同时,将关联像素的像素值作为当前像素的像素值;
在当前像素的颜色与关联像素的颜色不同时,根据关联像素单元的像素值通过第一插值算法计算当前像素的像素值,图像像素单元包括关联像素单元,关联像素单元的颜色与当前像素相同且与当前像素相邻;和
在关联像素位于景深区域外时,通过第二插值算法计算当前像素的像素值,第二插值算法的复杂度小于第一插值算法。
在某些实施方式中,处理器400用于执行以下步骤:
计算关联像素各个方向上的渐变量;
计算关联像素各个方向上的权重;和
根据渐变量及所述权重计算所述当前像素的像素值。
在某些实施方式中,处理器400用于执行以下步骤:
对色块图像做白平衡补偿;
对仿原图像做白平衡补偿还原。
在某些实施方式中,处理器400用于执行以下步骤:
对色块图像做坏点补偿。
在某些实施方式中,处理器400用于执行以下步骤:
对色块图像做串扰补偿。
在某些实施方式中,处理器400用于执行以下步骤:
对仿原图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
需要说明的是,前述对控制方法和控制装置100的解释说明也适用于本发明实施方式的电子装置1000,此处不再赘述。
本发明实施方式的计算机可读存储介质,具有存储于其中的指令,当电子装置1000的处理器400执行指令时,电子装置1000执行本发明实施方式的控制方法,前述对控制方法和控制装置100的解释说明也适用于本发明实施方式的计算机可读存储介质,此处不再赘述。
综上所述,本发明实施方式的电子装置1000和计算机可读存储介质,通过对景深范围的识别和判断,对景深范围内外不同区域分别采用不同转化方式,并输出合适的图像,避免图像传感器固定的转化及输出模式耗费资源且耗时,提高工作效率,同时又能满足用户对图像清晰度。
在本说明书的描述中,参考术语“一个实施例”、“一些实施例”、“示例”、“具体示例”、或“一些示例”等的描述意指结合该实施例或示例描述的具体特征、结构、材料或者特点包含于本发明的至少一个实施例或示例中。在本说明书中,对上述术语的示意性表述不必须针对的是相同的实施例或示例。而且,描述的具体特征、结构、材料或者特点可以在任一个或多个实施例或示例中以合适的方式结合。此外,在不相互矛盾的情况下,本领域的技术人员可以将本说明书中描述的不同实施例或示例以及不同实施例或示例的特征进行结合和组合。
此外,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括至少一个该特征。在本发明的描述中,“多个”的含义是至少两个,例如两个,三个等,除非另有明确具体的限定。
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括一个或更多个用于实现特定逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分,并且本发明的优选实施方式的范围包括另外的实现,其中可以不按所示出或讨论的顺序,
包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能,这应被本发明的实施例所属技术领域的技术人员所理解。
在流程图中表示或在此以其他方式描述的逻辑和/或步骤,例如,可以被认为是用于实现逻辑功能的可执行指令的定序列表,可以具体实现在任何计算机可读介质中,以供指令执行系统、装置或设备(如基于计算机的系统、包括处理器的系统或其他可以从指令执行系统、装置或设备取指令并执行指令的系统)使用,或结合这些指令执行系统、装置或设备而使用。就本说明书而言,"计算机可读介质"可以是任何可以包含、存储、通信、传播或传输程序以供指令执行系统、装置或设备或结合这些指令执行系统、装置或设备而使用的装置。计算机可读介质的更具体的示例(非穷尽性列表)包括以下:具有一个或多个布线的电连接部(电子装置),便携式计算机盘盒(磁装置),随机存取存储器(RAM),只读存储器(ROM),可擦除可编辑只读存储器(EPROM或闪速存储器),光纤装置,以及便携式光盘只读存储器(CDROM)。另外,计算机可读介质甚至可以是可在其上打印所述程序的纸或其他合适的介质,因为可以例如通过对纸或其他介质进行光学扫描,接着进行编辑、解译或必要时以其他合适方式进行处理来以电子方式获得所述程序,然后将其存储在计算机存储器中。
应当理解,本发明的各部分可以用硬件、软件、固件或它们的组合来实现。在上述实施方式中,多个步骤或方法可以用存储在存储器中且由合适的指令执行系统执行的软件或固件来实现。例如,如果用硬件来实现,和在另一实施方式中一样,可用本领域公知的下列技术中的任一项或他们的组合来实现:具有用于对数据信号实现逻辑功能的逻辑门电路的离散逻辑电路,具有合适的组合逻辑门电路的专用集成电路,可编程门阵列(PGA),现场可编程门阵列(FPGA)等。
本技术领域的普通技术人员可以理解实现上述实施例方法携带的全部或部分步骤是可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,该程序在执行时,包括方法实施例的步骤之一或其组合。
此外,在本发明各个实施例中的各功能单元可以集成在一个处理模块中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。
上述提到的存储介质可以是只读存储器,磁盘或光盘等。尽管上面已经示出和描述了本发明的实施例,可以理解的是,上述实施例是示例性的,不能理解为对本发明的限制,本领域的普通技术人员在本发明的范围内可以对上述实施例进行变化、修改、替换和变型。
Claims (20)
- 一种控制方法,用于控制电子装置,其特征在于,所述电子装置包括成像装置和显示器,所述成像装置包括图像传感器,所述图像传感器包括感光像素单元阵列和设置在所述感光像素单元阵列上的滤光片单元阵列,每个所述滤光片单元阵列覆盖对应一个所述感光像素单元,每个所述感光像素单元包括多个感光像素,所述控制方法包括以下步骤:控制所述图像传感器输出合并图像,所述合并图像包括合并像素阵列,同一所述感光像素单元的多个感光像素合并输出作为一个所述合并像素;将所述合并图像划分成阵列排布的分析区域;计算每个所述分析区域的相位差;将所述相位差符合预定条件对应的所述分析区域归并为景深区域;控制所述图像传感器输出色块图像,所述色块图像包括预定阵列排布的图像像素单元,所述图像像素单元包括多个原始像素,每个所述感光像素单元对应一个所述图像像素单元,每个所述感光像素对应一个所述图像像素;利用第一插值算法将所述色块图像转化成仿原图像,所述仿原图像包括阵列排布的仿原像素,每个所述感光像素对应一个所述仿原像素,所述仿原像素包括当前像素,所述原始像素包括与所述当前像素对应的关联像素,所述利用第一插值算法将所述色块图像转化成仿原图像包括以下步骤:判断所述关联像素是否位于所述景深区域内;在所述关联像素位于所述景深区域内时判断所述当前像素的颜色与所述关联像素的颜色是否相同;在所述当前像素的颜色与所述关联像素的颜色相同时,将所述关联像素的像素值作为所述当前像素的像素值;在所述当前像素的颜色与所述关联像素的颜色不同时,根据关联像素单元的像素值通过所述第一插值算法计算所述当前像素的像素值,所述图像像素单元包括所述关联像素单元,所述关联像素单元的颜色与所述当前像素相同且与所述当前像素相邻;和在所述关联像素位于所述景深区域外时,通过第二插值算法计算所述当前像素的像素值,所述第二插值算法的复杂度小于所述第一插值算法。
- 如权利要求1所述的控制方法,其特征在于,所述预定阵列包括拜耳阵列。
- 如权利要求1所述的控制方法,其特征在于,所述图像像素单元包括2*2阵列的所述原始像素。
- 如权利要求1所述的控制方法,其特征在于,所述根据关联像素单元的像素值通过第一插值算法计算当前像素的像素值的步骤包括以下步骤:计算所述关联像素各个方向上的渐变量;计算所述关联像素各个方向上的权重;和根据所述渐变量及所述权重计算所述当前像素的像素值。
- 如权利要求1所述的控制方法,其特征在于,所述控制方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤前包括以下步骤:对所述色块图像做白平衡补偿;所述控制方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤后包括以下步骤:对所述仿原图像做白平衡补偿还原。
- 如权利要求1所述的控制方法,其特征在于,所述控制方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤前包括以下步骤:对所述色块图像做坏点补偿。
- 如权利要求1所述的控制方法,其特征在于,所述控制方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤前包括以下步骤:对所述色块图像做串扰补偿。
- 如权利要求1所述的控制方法,其特征在于,所述控制方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤后包括以下步骤:对所述仿原图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
- 一种电子装置,其特征在于,包括:成像装置,所述成像装置包括图像传感器,所述图像传感器包括感光像素单元阵列和设置在所述感光像素单元阵列上的滤光片单元阵列,每个所述滤光片单元覆盖对应一个所述感光像素单元,每个所述感光像素单元包括多个感光像素;显示器;和处理器,所述处理器用于:控制所述图像传感器输出合并图像,所述合并图像包括合并像素阵列,同一所述感光像素单元的多个感光像素合并输出作为一个所述合并像素;将所述合并图像划分成阵列排布的分析区域;计算每个所述分析区域的相位差;将所述相位差符合预定条件对应的所述分析区域归并为景深区域;控制所述图像传感器输出色块图像,所述色块图像包括预定阵列排布的图像像素单元,所述图像像素单元包括多个原始像素,每个所述感光像素单元对应一个所述图像像素单元,每个所述感光像素对应一个所述图像像素;利用第一插值算法将所述色块图像转化成仿原图像,所述仿原图像包括阵列排布的仿原像素,每个所述感光像素对应一个所述仿原像素,所述仿原像素包括当前像素,所述原始像素包括与所述当前像素对应的关联像素,所述利用第一插值算法将所述色块图像转化成仿原图像包括以下步骤:判断所述关联像素是否位于所述景深区域内;在所述关联像素位于所述景深区域内时判断所述当前像素的颜色与所述关联像素的颜色是否相同;在所述当前像素的颜色与所述关联像素的颜色相同时,将所述关联像素的像素值作为所述当前像素的像素值;在所述当前像素的颜色与所述关联像素的颜色不同时,根据关联像素单元的像素值通过所述第一插值算法计算所述当前像素的像素值,所述图像像素单元包括所述关联像素单元,所述关联像素单元的颜色与所述当前像素相同且与所述当前像素相邻;和在所述关联像素位于所述景深区域外时,通过第二插值算法计算所述当前像素的像素值,所述第二插值算法的复杂度小于所述第一插值算法。
- 如权利要求9所述的电子装置,其特征在于,所述预定阵列包括拜耳阵列。
- 如权利要求9所述的电子装置,其特征在于,所述图像像素单元包括2*2阵列的所述原始像素。
- 如权利要求9所述的电子装置,其特征在于,所述处理器用于:计算所述关联像素各个方向上的渐变量;计算所述关联像素各个方向上的权重;和根据所述渐变量及所述权重计算所述当前像素的像素值。
- 如权利要求9所述的电子装置,其特征在于,所述处理器用于:对所述色块图像做白平衡补偿;和对所述仿原图像做白平衡补偿还原。
- 如权利要求9所述的电子装置，其特征在于，所述处理器用于：对所述色块图像做坏点补偿。
- 如权利要求9所述的电子装置,其特征在于,所述处理器用于:对所述色块图像做串扰补偿。
- 如权利要求9所述的电子装置,其特征在于,所述处理器用于:对所述仿原图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
- 如权利要求9所述的电子装置,其特征在于,所述电子装置包括手机或平板电脑。
- 如权利要求9所述的电子装置,其特征在于,所述成像装置包括前置相机或后置相机。
- 一种电子装置,包括壳体、处理器、存储器、电路板、电源电路和成像装置,其特征在于,所述电路板安置在所述壳体围成的空间内部,所述处理器和所述存储器设置在所述电路板上;所述电源电路,用于为所述电子装置的各个电路或器件供电;所述存储器用于存储可执行程序代码;所述处理器通过读取所述存储器中存储的可执行程序代码来运行与所述可执行程序代码对应的程序,以用于执行如权利要求1至8中任一项所述的控制方法。
- 一种计算机可读存储介质,具有存储于其中的指令,当电子装置的处理器执行所述指令时,所述电子装置执行如权利要求1至8中任一项所述的控制方法。
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611079637.X | 2016-11-29 | ||
CN201611079637.XA CN106507069B (zh) | 2016-11-29 | 2016-11-29 | 控制方法、控制装置及电子装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018099030A1 true WO2018099030A1 (zh) | 2018-06-07 |
Family
ID=58328140
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/087556 WO2018099030A1 (zh) | 2016-11-29 | 2017-06-08 | 控制方法和电子装置 |
Country Status (5)
Country | Link |
---|---|
US (1) | US10165205B2 (zh) |
EP (1) | EP3328078B1 (zh) |
CN (1) | CN106507069B (zh) |
ES (1) | ES2732973T3 (zh) |
WO (1) | WO2018099030A1 (zh) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106507068B (zh) | 2016-11-29 | 2018-05-04 | 广东欧珀移动通信有限公司 | Image processing method and apparatus, control method and apparatus, imaging device and electronic device |
CN106507019B (zh) | 2016-11-29 | 2019-05-10 | Oppo广东移动通信有限公司 | Control method, control apparatus and electronic device |
CN106507069B (zh) | 2016-11-29 | 2018-06-05 | 广东欧珀移动通信有限公司 | Control method, control apparatus and electronic device |
CN106791477B (zh) * | 2016-11-29 | 2019-07-19 | Oppo广东移动通信有限公司 | Image processing method, image processing apparatus, imaging apparatus and manufacturing method |
CN109295654B (zh) * | 2018-11-30 | 2020-11-17 | 苏州依唯森电器有限公司 | Washing machine panel enabling mechanism |
EP3823269A1 (en) * | 2019-11-12 | 2021-05-19 | Axis AB | Color image reconstruction |
CN112243095B (zh) * | 2020-09-29 | 2023-07-25 | 格科微电子(上海)有限公司 | Method and apparatus for reading PD pixels in pixel-binning mode, storage medium, and image acquisition device |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6829008B1 (en) * | 1998-08-20 | 2004-12-07 | Canon Kabushiki Kaisha | Solid-state image sensing apparatus, control method therefor, image sensing apparatus, basic layout of photoelectric conversion cell, and storage medium |
CN104280803A (zh) * | 2013-07-01 | 2015-01-14 | 全视科技有限公司 | Color filter array, color filter array device and image sensor |
CN105120248A (zh) * | 2015-09-14 | 2015-12-02 | 北京中科慧眼科技有限公司 | Pixel array and camera sensor |
CN105592303A (zh) * | 2015-12-18 | 2016-05-18 | 广东欧珀移动通信有限公司 | Imaging method, imaging apparatus and electronic device |
CN106454289A (zh) * | 2016-11-29 | 2017-02-22 | 广东欧珀移动通信有限公司 | Control method, control apparatus and electronic device |
CN106507069A (zh) * | 2016-11-29 | 2017-03-15 | 广东欧珀移动通信有限公司 | Control method, control apparatus and electronic device |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006140594A (ja) | 2004-11-10 | 2006-06-01 | Pentax Corp | Digital camera |
JP5446076B2 (ja) * | 2007-07-17 | 2014-03-19 | 株式会社ニコン | Digital camera |
US7745779B2 (en) | 2008-02-08 | 2010-06-29 | Aptina Imaging Corporation | Color pixel arrays having common color filters for multiple adjacent pixels for use in CMOS imagers |
WO2012073728A1 (ja) * | 2010-11-30 | 2012-06-07 | 富士フイルム株式会社 | Imaging device and focusing position detection method therefor |
CN103430073B (zh) * | 2011-03-31 | 2014-12-31 | 富士胶片株式会社 | Imaging device and method for controlling imaging device |
JP5818514B2 (ja) * | 2011-05-27 | 2015-11-18 | キヤノン株式会社 | Image processing apparatus, image processing method, and program |
JP2013066146A (ja) | 2011-08-31 | 2013-04-11 | Sony Corp | Image processing apparatus, image processing method, and program |
US20130202191A1 (en) * | 2012-02-02 | 2013-08-08 | Himax Technologies Limited | Multi-view image generating method and apparatus using the same |
WO2014118868A1 (ja) | 2013-01-30 | 2014-08-07 | パナソニック株式会社 | Imaging device and solid-state imaging device |
US20140267701A1 (en) * | 2013-03-12 | 2014-09-18 | Ziv Aviv | Apparatus and techniques for determining object depth in images |
EP2887642A3 (en) * | 2013-12-23 | 2015-07-01 | Nokia Corporation | Method, apparatus and computer program product for image refocusing for light-field images |
DE102014209197B4 (de) * | 2014-05-15 | 2024-09-19 | Continental Autonomous Mobility Germany GmbH | Device and method for detecting precipitation for a motor vehicle |
US9479695B2 (en) | 2014-07-31 | 2016-10-25 | Apple Inc. | Generating a high dynamic range image using a temporal filter |
CN105611124B (zh) * | 2015-12-18 | 2017-10-17 | 广东欧珀移动通信有限公司 | Image sensor, imaging method, imaging apparatus and terminal |
CN105590939B (zh) * | 2015-12-18 | 2019-07-19 | Oppo广东移动通信有限公司 | Image sensor and output method, phase focusing method, imaging apparatus and terminal |
CN105791683B (zh) * | 2016-02-29 | 2019-02-12 | Oppo广东移动通信有限公司 | Control method, control apparatus and electronic device |
2016
- 2016-11-29 CN CN201611079637.XA patent/CN106507069B/zh not_active Expired - Fee Related
2017
- 2017-06-08 WO PCT/CN2017/087556 patent/WO2018099030A1/zh active Application Filing
- 2017-10-13 US US15/783,203 patent/US10165205B2/en active Active
- 2017-11-03 EP EP17199860.2A patent/EP3328078B1/en active Active
- 2017-11-03 ES ES17199860T patent/ES2732973T3/es active Active
Also Published As
Publication number | Publication date |
---|---|
CN106507069B (zh) | 2018-06-05 |
CN106507069A (zh) | 2017-03-15 |
EP3328078B1 (en) | 2019-06-05 |
EP3328078A1 (en) | 2018-05-30 |
US20180152648A1 (en) | 2018-05-31 |
ES2732973T3 (es) | 2019-11-26 |
US10165205B2 (en) | 2018-12-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018099010A1 (zh) | Control method, control apparatus and electronic device | |
WO2018099030A1 (zh) | Control method and electronic device | |
WO2018099008A1 (zh) | Control method, control apparatus and electronic device | |
WO2018099005A1 (zh) | Control method, control apparatus and electronic device | |
WO2018099012A1 (zh) | Image processing method, image processing apparatus, imaging apparatus and electronic device | |
WO2018099007A1 (zh) | Control method, control apparatus and electronic device | |
WO2018098977A1 (zh) | Image processing method, image processing apparatus, imaging apparatus, manufacturing method and electronic device | |
TWI651582B (zh) | Image sensor, image processing method, imaging device and mobile terminal | |
WO2018099011A1 (zh) | Image processing method, image processing apparatus, imaging apparatus and electronic device | |
WO2018099006A1 (zh) | Control method, control apparatus and electronic device | |
TWI662842B (zh) | Image sensor and output method, phase focusing method, imaging device and mobile device | |
WO2018099031A1 (zh) | Control method and electronic device | |
WO2018098982A1 (zh) | Image processing method, image processing apparatus, imaging apparatus and electronic device | |
TWI636314B (zh) | Dual-core focusing image sensor, focusing control method therefor, imaging device and mobile terminal | |
WO2018099009A1 (zh) | Control method, control apparatus, electronic device and computer-readable storage medium | |
WO2018098984A1 (zh) | Control method, control apparatus, imaging apparatus and electronic device | |
WO2018098978A1 (zh) | Control method, control apparatus, electronic device and computer-readable storage medium | |
WO2018098981A1 (zh) | Control method, control apparatus, electronic device and computer-readable storage medium | |
WO2018098983A1 (zh) | Image processing method and apparatus, control method and apparatus, imaging and electronic device | |
CN102870404B (zh) | Imaging apparatus and dark current correction method therefor | |
WO2017101451A1 (zh) | Imaging method, imaging apparatus and electronic device | |
WO2018196704A1 (zh) | Dual-core focusing image sensor, focusing control method therefor and imaging device | |
CN102780888A (zh) | Image processing apparatus, image processing method and electronic camera | |
US20230217126A1 (en) | Imaging apparatus, imaging method | |
JP5968145B2 (ja) | Image processing apparatus and control method therefor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17876007 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 17876007 Country of ref document: EP Kind code of ref document: A1 |