WO2023087908A1 - Focus control method and apparatus, image sensor, electronic device, and computer-readable storage medium - Google Patents

Focus control method and apparatus, image sensor, electronic device, and computer-readable storage medium

Info

Publication number
WO2023087908A1
WO2023087908A1 (PCT/CN2022/120545, CN 2022120545 W)
Authority
WO
WIPO (PCT)
Prior art keywords
phase
array
pixel
phase information
size
Prior art date
Application number
PCT/CN2022/120545
Other languages
English (en)
French (fr)
Inventor
杨鑫
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Publication of WO2023087908A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H04N23/672Focus control based on electronic image sensor signals based on the phase difference signals
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B5/00Optical elements other than lenses
    • G02B5/20Filters
    • G02B5/201Filters in the form of arrays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70SSIS architectures; Circuits associated therewith
    • H04N25/71Charge-coupled device [CCD] sensors; Charge-transfer registers specially adapted for CCD sensors
    • H04N25/75Circuitry for providing, modifying or processing image signals from the pixel array

Definitions

  • the present application relates to the technical field of image processing, and in particular to a focus control method, device, image sensor, electronic equipment, and computer-readable storage medium.
  • Phase detection auto focus is abbreviated as PDAF.
  • Traditional phase detection autofocus mainly calculates a phase difference based on an RGB pixel array and then controls a motor according to that phase difference; the motor drives the lens to a suitable position for focusing, so that the subject is imaged on the focal plane.
  • Embodiments of the present application provide a focus control method, device, electronic device, image sensor, and computer-readable storage medium, which can improve focus accuracy.
  • an image sensor includes a pixel array and a filter array; the filter array includes a minimum repeating unit, the minimum repeating unit includes a plurality of filter groups, and each filter group includes a color filter and a panchromatic filter; the color filter has a narrower spectral response than the panchromatic filter, and the color filter and the panchromatic filter each include 9 sub-filters arranged in an array;
  • the pixel array includes a plurality of panchromatic pixel groups and a plurality of color pixel groups, each panchromatic pixel group corresponding to a panchromatic filter and each color pixel group corresponding to a color filter;
  • the panchromatic pixel group and the color pixel group each include 9 pixels, the pixels of the pixel array correspond to the sub-filters of the filter array, and each pixel includes at least two sub-pixels arranged in an array, each sub-pixel corresponding to a photosensitive element.
  • a focus control method, applied to the image sensor described above, the method including:
  • determining, according to the light intensity of the current shooting scene, a phase information output mode adapted to that light intensity; wherein, in different phase information output modes, the sizes of the output phase arrays are different;
  • outputting, according to the phase information output mode, a phase array corresponding to the pixel array; wherein the phase array includes phase information corresponding to a target pixel in the pixel array;
  • calculating a phase difference of the pixel array based on the phase array, and performing focus control according to the phase difference.
  • a focus control device, applied to the image sensor described above, the device including:
  • a phase information output mode determination module configured to determine, according to the light intensity of the current shooting scene, a phase information output mode adapted to that light intensity; wherein, in different phase information output modes, the sizes of the output phase arrays are different;
  • a phase array output module configured to output, according to the phase information output mode, a phase array corresponding to the pixel array; wherein the phase array includes phase information corresponding to a target pixel in the pixel array;
  • a focus control module configured to calculate a phase difference of the pixel array based on the phase array, and to perform focus control according to the phase difference.
  • an electronic device including a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the processor performs the operations of the focus control method described above.
  • a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the operations of the method as described above are implemented.
  • a computer program product including a computer program/instructions, wherein, when the computer program/instructions are executed by a processor, the operations of the focus control method described above are implemented.
  • Fig. 1 is a schematic structural diagram of an electronic device in an embodiment
  • FIG. 2 is a schematic diagram of the principle of phase detection autofocus
  • FIG. 3 is a schematic diagram of phase detection pixels arranged in pairs among the pixels included in the image sensor
  • Fig. 4 is an exploded schematic diagram of an image sensor in an embodiment
  • Fig. 5 is a schematic diagram of connection between a pixel array and a readout circuit in an embodiment
  • Figure 6 is a schematic diagram of the arrangement of the smallest repeating unit in an embodiment
  • Figure 7 is a schematic diagram of the arrangement of the smallest repeating unit in another embodiment
  • FIG. 8 is a flowchart of a focus control method in an embodiment
  • Fig. 9 is a flowchart of a method for determining a phase information output mode adapted to the light intensity of the current shooting scene according to the target light intensity range in an embodiment
  • FIG. 10 is a flowchart of a method for generating a full-size phase array in one embodiment
  • FIG. 11 is a schematic diagram of generating a full-size phase array in one embodiment
  • FIG. 12 is a flowchart of a method for generating a first-size phase array in one embodiment
  • FIG. 13 is a schematic diagram of generating a first-size phase array in another embodiment
  • FIG. 14 is a flowchart of a method for generating a second-size phase array in one embodiment
  • FIG. 15 is a schematic diagram of generating a second-size phase array in one embodiment
  • FIG. 16 is a schematic diagram of generating a third-size phase array in an embodiment
  • Fig. 17 is a structural block diagram of a focus control device in an embodiment
  • Fig. 18 is a structural block diagram of the phase array output module in Fig. 17;
  • Fig. 19 is a schematic diagram of the internal structure of an electronic device in one embodiment.
  • The terms "first", "second", "third" and the like used in this application may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another.
  • For example, a first size could be termed a second size, and, similarly, a second size could be termed a first size, without departing from the scope of the present application.
  • Both the first size and the second size are sizes, but they are not the same size.
  • the first preset threshold may be referred to as a second preset threshold, and similarly, the second preset threshold may be referred to as a first preset threshold. Both the first preset threshold and the second preset threshold are preset thresholds, but they are not the same preset threshold.
  • FIG. 1 is a schematic diagram of an application environment of a focus control method in an embodiment.
  • the application environment includes an electronic device 100 .
  • The electronic device 100 includes an image sensor, and the image sensor includes a pixel array. The electronic device determines, according to the light intensity of the current shooting scene, a phase information output mode adapted to that light intensity; in different phase information output modes, the sizes of the output phase arrays are different. According to the phase information output mode, a phase array corresponding to the pixel array is output, where the phase array includes the phase information corresponding to the pixel array. The phase difference of the pixel array is calculated based on the phase array, and focus control is performed according to the phase difference.
  • electronic devices can be mobile phones, tablet computers, PDA (Personal Digital Assistant, personal digital assistant), wearable devices (smart bracelets, smart watches, smart glasses, smart gloves, smart socks, smart belts, etc.), VR (virtual reality, virtual reality) devices, smart homes, driverless cars and other terminal devices with image processing functions.
  • the electronic device 100 includes a camera 20 , a processor 30 and a casing 40 .
  • Both the camera 20 and the processor 30 are arranged in the casing 40. The casing 40 can also be used to install functional modules such as a power supply device and a communication device of the terminal 100, so that the casing 40 provides protection such as dustproofing, drop-proofing, and waterproofing for the functional modules.
  • the camera 20 may be a front camera, a rear camera, a side camera, an under-screen camera, etc., which is not limited here.
  • the camera 20 includes a lens and an image sensor 21. When the camera 20 captures an image, light passes through the lens and reaches the image sensor 21.
  • the image sensor 21 is used to convert the light signal irradiated on the image sensor 21 into an electrical signal.
  • Fig. 2 is a schematic diagram of the principle of phase detection auto focus (PDAF).
  • M1 is the position of the image sensor when the imaging device is in the in-focus state, where the in-focus state refers to a state of successful focus. If the image sensor is located at position M1, the imaging rays g reflected from the object W to the lens Lens in different directions converge on the image sensor; that is, they are imaged at the same position on the image sensor, and at this time the image formed by the image sensor is clear.
  • M2 and M3 are the possible positions of the image sensor when the imaging device is not in focus.
  • If the image sensor is located at position M2 or position M3, the imaging rays g reflected from the object W to the lens Lens in different directions will be imaged at different positions.
  • If the image sensor is at position M2, the imaging rays g reflected from the object W to the lens Lens in different directions are imaged at position A and position B respectively; if the image sensor is at position M3, they are imaged at positions C and D respectively, and at this time the image formed by the image sensor is not clear.
  • The difference in the positions of the images formed on the image sensor by the imaging rays entering the lens from different directions can be obtained; for example, as shown in FIG. 2, the difference between position A and position B, or between position C and position D, can be obtained. After obtaining this difference, the defocus distance can be computed from it together with the geometric relationship between the lens and the image sensor in the camera.
  • the so-called defocus distance refers to the distance between the current position of the image sensor and the position where the image sensor should be in the in-focus state; the imaging device can focus according to the obtained defocus distance.
  • When the imaging device is in focus, the calculated PD (phase difference) value is 0.
  • The larger the calculated value, the farther the image sensor is from the focal point; the smaller the value, the closer it is to the focal point.
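The relationship above (PD of 0 at focus, larger PD meaning farther from the focal point) can be sketched as a small conversion routine. This is an illustrative sketch, not the patent's implementation: the conversion coefficient `dcc` and the direction labels are assumptions for the example.

```python
def defocus_from_pd(pd, dcc):
    """Return (defocus_distance, direction) for a phase difference value.

    pd  -- signed phase difference in pixels (0 means in focus)
    dcc -- defocus conversion coefficient mapping pixels of phase
           difference to lens-travel distance (assumed calibrated
           per camera module)
    """
    defocus = abs(pd) * dcc  # larger |PD| -> farther from the focal point
    if pd == 0:
        direction = "in_focus"
    elif pd > 0:
        direction = "forward"
    else:
        direction = "backward"
    return defocus, direction

# In-focus: PD of 0 gives zero defocus distance.
print(defocus_from_pd(0, 2.5))
# A larger PD gives a proportionally larger defocus distance.
print(defocus_from_pd(4, 2.5))
```

The motor would then be driven by the returned distance and direction, which is the "focus according to the obtained defocus distance" step described above.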
  • phase detection pixel points can be set in pairs among the pixel points included in the image sensor.
  • For example, the image sensor can be provided with phase detection pixel point pairs (hereinafter referred to as pixel pairs) A, B, and C.
  • In each pixel pair, one phase detection pixel is shielded on the left (Left Shield) and
  • the other phase detection pixel is shielded on the right (Right Shield).
  • In this way, the imaging beam can be divided into left and right parts, and the phase difference can be obtained by comparing the images formed by the left and right parts of the imaging beam.
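The comparison of the left and right images can be sketched as a simple correlation search: shift one 1-D signal against the other and pick the shift with the smallest sum of absolute differences (SAD). This is a minimal illustrative sketch, not the patent's algorithm; the window size and normalization are assumptions.

```python
def phase_difference(left, right, max_shift=4):
    """Estimate the shift (in pixels) between the left and right images.

    Tries every integer shift in [-max_shift, max_shift] and returns
    the one minimizing the mean absolute difference over the overlap.
    """
    best_shift, best_sad = 0, float("inf")
    n = len(left)
    for s in range(-max_shift, max_shift + 1):
        sad, count = 0.0, 0
        for i in range(n):
            j = i + s
            if 0 <= j < n:
                sad += abs(left[i] - right[j])
                count += 1
        sad /= count  # normalize by overlap length
        if sad < best_sad:
            best_sad, best_shift = sad, s
    return best_shift

left = [0, 0, 10, 50, 10, 0, 0, 0]
right = [0, 0, 0, 0, 10, 50, 10, 0]  # same profile shifted by +2
print(phase_difference(left, right))  # → 2
```

The returned shift is the PD value used in the defocus calculation: identical left and right images give 0, and a larger shift means the sensor is farther from the focal plane.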
  • the image sensor includes a pixel array and a filter array
  • the filter array includes a minimum repeating unit
  • the minimum repeating unit includes a plurality of filter groups
  • the filter groups include color filters and panchromatic filters
  • the color filters are arranged in the first diagonal direction in the filter group
  • the panchromatic filters are arranged in the second diagonal direction, and the first diagonal direction is different from the second diagonal direction
  • the color filter has a narrower spectral response than the panchromatic filter
  • both the color filter and the panchromatic filter include 9 sub-filters arranged in an array
  • the pixel array includes a plurality of panchromatic pixel groups and a plurality of color pixel groups, each panchromatic pixel group corresponds to a panchromatic filter, and each color pixel group corresponds to a color filter; the panchromatic pixel group and the color pixel group Each includes 9 pixels, and the pixels of the pixel array are arranged corresponding to the sub-filters of the filter array, and each pixel includes at least two sub-pixels arranged in an array, and each sub-pixel corresponds to a photosensitive element.
  • the image sensor 21 includes a microlens array 22 , a filter array 23 , and a pixel array 24 .
  • the microlens array 22 includes a plurality of microlenses 221, the microlenses 221, the sub-filters in the filter array 23, and the pixels in the pixel array 24 are set in one-to-one correspondence, and the microlenses 221 are used to gather the incident light.
  • the collected light will pass through the corresponding sub-filter, and then projected onto the pixel, and be received by the corresponding pixel, and the pixel converts the received light into an electrical signal.
  • the filter array 23 includes a plurality of minimal repeating units 231 .
  • the minimum repeating unit 231 may include a plurality of filter sets 232 .
  • Each filter set 232 includes a panchromatic filter 233 and a color filter 234 having a narrower spectral response than the panchromatic filter 233 .
  • Each panchromatic filter 233 includes 9 sub-filters 2331
  • each color filter 234 includes 9 sub-filters 2341 .
  • Different filter sets may include color filters 234 of different colors.
  • the colors corresponding to the wavelength bands of the transmitted light of the color filters 234 of the filter sets 232 in the minimum repeating unit 231 include color a, color b and/or color c.
  • That is, the colors corresponding to the wavelength bands of the light transmitted by the color filters 234 of the filter groups 232 may include color a, color b and color c; or only one of color a, color b or color c; or any two of them (color a and color b, color b and color c, or color a and color c).
  • the color a is red
  • the color b is green
  • the color c is blue; or, for example, the color a is magenta, the color b is cyan, and the color c is yellow, etc., which is not limited here.
  • the width of the wavelength band of the light transmitted by the color filter 234 is smaller than the width of the wavelength band of the light transmitted by the panchromatic filter 233, for example, the wavelength band of the light transmitted by the color filter 234 It can correspond to the wavelength band of red light, the wavelength band of green light, or the wavelength band of blue light.
  • the wavelength band of the light transmitted by the panchromatic filter 233 is the wavelength band of all visible light, that is to say, the color filter 234 only allows specific color light
  • the panchromatic filter 233 can pass light of all colors.
  • the wavelength band of the light transmitted by the color filter 234 may also correspond to the wavelength band of other colored light, such as magenta light, purple light, cyan light, yellow light, etc., which is not limited here.
  • the ratio of the number of color filters 234 to the number of panchromatic filters 233 in the filter set 232 may be 1:3, 1:1 or 3:1. For example, if the ratio of the number of color filters 234 to the number of panchromatic filters 233 is 1:3, then the number of color filters 234 is 1, and the number of panchromatic filters 233 is 3.
  • When the number of panchromatic filters 233 is large, compared with the traditional case of only color filters, more phase information can be obtained through the panchromatic filters 233 in dark light, so the focusing quality is better. Alternatively, if the ratio of the number of color filters 234 to the number of panchromatic filters 233 is 1:1, then there are 2 color filters 234 and 2 panchromatic filters 233; in this case, better color performance can be obtained while more phase information is still obtained through the panchromatic filters 233 in dark light, so the focus quality is also better. Alternatively, if the ratio is 3:1, then there are 3 color filters 234 and 1 panchromatic filter 233; in this case, better color performance can be obtained, and the focus quality in dark light is likewise improved.
  • the pixel array 24 includes a plurality of pixels, and the pixels of the pixel array 24 are arranged corresponding to the sub-filters of the filter array 23 .
  • the pixel array 24 is configured to receive light passing through the filter array 23 to generate electrical signals.
  • That the pixel array 24 is configured to receive the light passing through the filter array 23 to generate an electrical signal means that the pixel array 24 photoelectrically converts the light of the scene of a given set of subjects that passes through the filter array 23 to generate an electrical signal.
  • The light rays of the scene of the given set of subjects are used to generate image data.
  • the subject is a building
  • the scene of a given set of subjects refers to the scene where the building is located, which may also contain other objects.
  • the pixel array 24 can be an RGBW pixel array, including a plurality of minimum repeating units 241, the minimum repeating unit 241 includes a plurality of pixel groups 242, and the plurality of pixel groups 242 includes a panchromatic pixel group 243 and a color pixel group 244 .
  • Each panchromatic pixel group 243 includes 9 panchromatic pixels 2431
  • each color pixel group 244 includes 9 color pixels 2441 .
  • Each panchromatic pixel 2431 corresponds to a sub-filter 2331 in the panchromatic filter 233, and the panchromatic pixel 2431 receives light passing through the corresponding sub-filter 2331 to generate an electrical signal.
  • Each color pixel 2441 corresponds to a sub-filter 2341 of the color filter 234, and the color pixel 2441 receives light passing through the corresponding sub-filter 2341 to generate an electrical signal.
  • Each pixel includes at least two sub-pixels arranged in an array, and each sub-pixel corresponds to a photosensitive element. That is, each panchromatic pixel 2431 includes at least two sub-pixels 2431a and 2431b arranged in an array, and each sub-pixel corresponds to a photosensitive element; each color pixel 2441 includes at least two sub-pixels 2441a and 2441b arranged in an array, and each sub-pixel corresponds to A photosensitive element.
  • each panchromatic pixel 2431 includes at least two sub-pixels arranged in an array, specifically two sub-pixels arranged in an array, or four sub-pixels arranged in an array, which is not limited in the present application.
  • The image sensor 21 in this embodiment includes a filter array 23 and a pixel array 24. The filter array 23 includes a minimum repeating unit 231, the minimum repeating unit 231 includes a plurality of filter groups 232, and each filter group includes a panchromatic filter 233 and a color filter 234. Since the color filter 234 has a narrower spectral response than the panchromatic filter 233, more light can be obtained when shooting, so the focusing quality in low light is improved without adjusting the shooting parameters; both the stability and the quality of focusing in low light are therefore high.
  • each panchromatic filter 233 includes 9 sub-filters 2331
  • each color filter 234 includes 9 sub-filters 2341
  • the pixel array 24 includes a plurality of panchromatic pixels 2431 and a plurality of color pixels 2441
  • each panchromatic pixel 2431 corresponds to a sub-filter 2331 of the panchromatic filter 233
  • each color pixel 2441 corresponds to a sub-filter 2341 of the color filter 234
  • the panchromatic pixel 2431 and the color pixel 2441 It is used to receive light passing through the corresponding sub-filters to generate electrical signals.
  • the phase information of the pixels corresponding to the 9 sub-filters can be combined and output to obtain phase information with a high signal-to-noise ratio.
  • Alternatively, the phase information of the pixel corresponding to each sub-filter can be output separately, so as to obtain phase information with high resolution. The two output options can adapt to different application scenarios and improve the focus quality in various scenes.
  • the smallest repeating unit 231 in the filter array 23 includes 4 filter groups 232 , and the 4 filter groups 232 are arranged in a matrix.
  • Each filter group 232 comprises two panchromatic filters 233 and two color filters 234; since each panchromatic filter 233 and each color filter 234 has 9 sub-filters, the filter group 232 includes a total of 36 sub-filters.
  • the pixel array 24 includes a plurality of minimum repeating units 241 corresponding to the plurality of minimum repeating units 231 .
  • Each minimum repeating unit 241 includes 4 pixel groups 242 , and the 4 pixel groups 242 are arranged in a matrix.
  • Each pixel group 242 corresponds to a filter group 232 .
  • the readout circuit 25 is electrically connected to the pixel array 24 for controlling the exposure of the pixel array 24 and reading and outputting the pixel values of the pixel points.
  • the readout circuit 25 includes a vertical drive unit 251 , a control unit 252 , a column processing unit 253 , and a horizontal drive unit 254 .
  • the vertical driving unit 251 includes a shift register and an address decoder.
  • the vertical driving unit 251 includes readout scanning and reset scanning functions.
  • the control unit 252 configures timing signals according to the operation mode, and uses various timing signals to control the vertical driving unit 251 , the column processing unit 253 and the horizontal driving unit 254 to work together.
  • the column processing unit 253 may have an analog-to-digital (A/D) conversion function for converting an analog pixel signal into a digital format.
  • the horizontal driving unit 254 includes a shift register and an address decoder. The horizontal driving unit 254 sequentially scans the pixel array 24 column by column.
  • In FIG. 6, each filter group 232 includes color filters 234 and panchromatic filters 233; each panchromatic filter 233 in the filter group 232 is arranged in the first diagonal direction D1, and each color filter 234 in the filter group 232 is arranged in the second diagonal direction D2.
  • the direction of the first diagonal line D1 and the direction of the second diagonal line D2 are different, which can take into account both color performance and low-light focusing quality.
  • The direction of the first diagonal line D1 is different from the direction of the second diagonal line D2; specifically, the direction of the first diagonal line D1 may be non-parallel to, or perpendicular to, the direction of the second diagonal line D2.
  • Alternatively, one color filter 234 and one panchromatic filter 233 can be located on the first diagonal line D1, and the other color filter 234 and the other panchromatic filter 233 can be located on the second diagonal line D2.
  • each pixel includes at least two sub-pixels arranged in an array, and each sub-pixel corresponds to a photosensitive element.
  • the photosensitive element is an element capable of converting light signals into electrical signals.
  • the photosensitive element can be a photodiode.
  • In FIG. 6, each panchromatic pixel 2431 includes 2 sub-pixels d arranged in an array (i.e., 2 photodiodes: a left photodiode and a right photodiode), and each color pixel 2441 includes 2 sub-pixels d arranged in an array (i.e., 2 photodiodes: a left photodiode and a right photodiode).
  • each panchromatic pixel 2431 may also include 4 sub-pixels d arranged in an array (that is, 4 photodiodes PD (Up-Left PhotoDiode, Up-Right PhotoDiode, Down-Left PhotoDiode and Down-Right PhotoDiode)), each Each color pixel 2441 includes 4 sub-pixels d arranged in an array (ie, 4 photodiodes PD (Up-Left PhotoDiode, Up-Right PhotoDiode, Down-Left PhotoDiode and Down-Right PhotoDiode)). This application does not limit this.
  • Since each pixel includes at least two sub-pixels arranged in an array and each sub-pixel corresponds to a photosensitive element, the phase difference of the pixel array can be calculated based on the phase information of the at least two sub-pixels.
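With dual-photodiode pixels like these, the left and right images used for the phase comparison can be obtained simply by de-interleaving the sub-pixel readout. The sketch below assumes, for illustration only, a readout layout where sub-pixel columns alternate left, right, left, right; the actual readout order is sensor-specific.

```python
def split_left_right(subpixels):
    """Split an H x W sub-pixel readout into left and right images.

    subpixels -- rows of sub-pixel values whose columns alternate
                 L, R, L, R, ... (assumed layout for this sketch)
    """
    left = [row[0::2] for row in subpixels]   # even columns: left photodiodes
    right = [row[1::2] for row in subpixels]  # odd columns: right photodiodes
    return left, right

row = [10, 11, 20, 21, 30, 31]  # (L, R) value pairs for three pixels
left, right = split_left_right([row])
print(left)   # → [[10, 20, 30]]
print(right)  # → [[11, 21, 31]]
```

The two de-interleaved images then play the same role as the images formed by the left- and right-shielded pixel pairs described earlier.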
  • the smallest repeating unit 231 in the filter array 23 includes 4 filter groups 232 , and the 4 filter groups 232 are arranged in a matrix.
  • Each filter set 232 includes two panchromatic filters 233 and two color filters 234 .
  • The panchromatic filter 233 includes 9 sub-filters 2331 and the color filter 234 includes 9 sub-filters 2341, so the minimum repeating unit 231 is 12 rows by 12 columns with 144 sub-filters, arranged as follows:
  • w represents the panchromatic sub-filter 2331
  • a, b and c all represent the color sub-filter 2341 .
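One arrangement consistent with this description (4 filter groups in a 2x2 matrix; each group a 2x2 matrix of filters with the 2 panchromatic filters on one diagonal and the 2 color filters on the other; each filter a 3x3 block of identical sub-filters) can be constructed programmatically. This is a hedged reconstruction for illustration: the assignment of colors a, b, c to groups and the choice of diagonal are assumptions, not the patent's stated layout.

```python
def minimal_repeating_unit(colors=(("a", "b"), ("b", "c"))):
    """Build a 12x12 grid of sub-filter labels ('w', 'a', 'b', 'c').

    colors -- per-filter-group color label, indexed [group_row][group_col]
              (assumed assignment for this sketch)
    """
    unit = [["" for _ in range(12)] for _ in range(12)]
    for gr in range(2):              # filter-group row within the unit
        for gc in range(2):          # filter-group column
            for fr in range(2):      # filter row within the group
                for fc in range(2):  # filter column within the group
                    # panchromatic 'w' on one diagonal of each group,
                    # the group's color on the other diagonal
                    label = "w" if fr == fc else colors[gr][gc]
                    for r in range(3):       # fill the 3x3 sub-filter block
                        for c in range(3):
                            unit[gr * 6 + fr * 3 + r][gc * 6 + fc * 3 + c] = label
    return unit

unit = minimal_repeating_unit()
print(len(unit), len(unit[0]))                    # → 12 12
print(sum(row.count("w") for row in unit))        # → 72 (half of 144)
```

Half of the 144 sub-filters are panchromatic, matching the 1:1 color-to-panchromatic ratio of this embodiment.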
  • The panchromatic sub-filter 2331 refers to a sub-filter that filters out all light outside the visible band, while
  • the color sub-filter 2341 includes a red sub-filter, a green sub-filter, a blue sub-filter, a magenta sub-filter, a cyan sub-filter, and a yellow sub-filter.
  • The red sub-filter is a sub-filter that filters out all light except red light;
  • the green sub-filter is a sub-filter that filters out all light except green light;
  • the blue sub-filter is a sub-filter that filters out all light except blue light;
  • the magenta sub-filter filters out all light except magenta light, and the cyan sub-filter filters out all light except cyan light;
  • the yellow sub-filter is a sub-filter that filters out all light except yellow light.
  • Each of a, b, and c can be a red sub-filter, a green sub-filter, a blue sub-filter, a magenta sub-filter, a cyan sub-filter, or a yellow sub-filter.
  • For example, b is the red sub-filter, a is the green sub-filter, and c is the blue sub-filter; or, c is the red sub-filter, a is the green sub-filter, and b is the blue sub-filter; or, a is the red sub-filter, b is the blue sub-filter, and c is the green sub-filter, etc., which is not limited here. For another example, a is a cyan sub-filter, b is a magenta sub-filter, and c is a yellow sub-filter, etc.
  • the color filter may further include sub-filters of other colors, such as an orange sub-filter, a purple sub-filter, etc., which are not limited here.
  • the minimum repeating unit 231 in the filter array 23 includes 4 filter groups 232 , and the 4 filter groups 232 are arranged in a matrix.
  • Each filter group 232 includes color filters 234 and panchromatic filters 233; each color filter 234 in the filter group 232 is arranged in the direction of the first diagonal line D1, and each panchromatic filter 233 in the filter group 232 is arranged in the direction of the second diagonal line D2.
  • the pixels of the pixel array (not shown in FIG. 7 , can refer to FIG. 6 ) correspond to the sub-filters of the filter array, and each pixel includes at least two sub-pixels arranged in an array, and each sub-pixel corresponds to one Photosensitive element.
  • In FIG. 7, each filter group 232 includes 2 panchromatic filters 233 and 2 color filters 234; each panchromatic filter 233 includes 9 sub-filters 2331 and each color filter 234 includes 9 sub-filters 2341, so the minimum repeating unit 231 is 12 rows by 12 columns with 144 sub-filters, arranged as follows:
  • w represents a panchromatic sub-filter
  • a, b, and c all represent color sub-filters.
  • The advantage of the quad layout is that pixels can be locally combined 2-by-2 or binned 3-by-3 to obtain images of different resolutions with a high signal-to-noise ratio.
  • The quad full-size output has a high pixel count, and a full-size, full-resolution image is obtained with higher definition.
  • The advantage of RGBW is that the W pixels increase the overall light intake of the image, thereby improving the signal-to-noise ratio of the image.
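The 3-by-3 binning the quad layout permits can be sketched as follows: the 9 sub-filter outputs under one filter are averaged into a single value, trading resolution for signal-to-noise ratio. This is a minimal illustration of the binning operation itself, not the sensor's on-chip readout.

```python
def bin3x3(pixels):
    """Bin an H x W array (H, W multiples of 3) into (H/3) x (W/3) by averaging.

    Each output value is the mean of a 3x3 block, which averages down
    noise and so raises the signal-to-noise ratio at 1/9 the resolution.
    """
    h, w = len(pixels), len(pixels[0])
    out = []
    for r in range(0, h, 3):
        row = []
        for c in range(0, w, 3):
            block = [pixels[r + i][c + j] for i in range(3) for j in range(3)]
            row.append(sum(block) / 9)
        out.append(row)
    return out

# A 6x6 patch: two filters' worth of identical sub-pixel values.
pixels = [[1] * 6 for _ in range(3)] + [[4] * 6 for _ in range(3)]
print(bin3x3(pixels))  # → [[1.0, 1.0], [4.0, 4.0]]
```

A 2-by-2 combination works the same way with a block size of 2, giving the intermediate resolution mentioned above.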
  • a focus control method is provided, which is applied to the image sensor in the above embodiment, the image sensor includes a pixel array and a filter array, and the method includes:
  • Operation 820: determine, according to the light intensity of the current shooting scene, a phase information output mode adapted to that light intensity; wherein, in different phase information output modes, the sizes of the output phase arrays are different.
  • the light intensity of the current shooting scene varies, and since the sensitivity of the RGB pixel array differs under different light intensities, the accuracy of the phase difference calculated by the RGB pixel array is low under some light intensities, which in turn leads to a significant decrease in focusing accuracy.
  • Light intensity, also called illuminance, is a physical term referring to the luminous flux of visible light received per unit area; its unit is the lux (lx).
  • Light intensity indicates how strong or weak the light is and how strongly the surface of an object is illuminated. The following table shows the light intensity values under different weather conditions and locations:
  • the phase information output mode adapted to the light intensity of the scene, and then use different phase information output modes to output the phase information of the pixel array.
  • the phase information output mode refers to a mode of processing the original phase information based on the original phase information of the pixel array to generate the final output phase information of the pixel array.
  • the sizes of the output phase arrays are different. That is, under different light intensities of the current shooting scene, the sizes of the phase arrays output by the same pixel array are different.
  • the phase information corresponding to the same pixel array is directly output as the phase array corresponding to the pixel array or combined to a certain extent to generate the phase array corresponding to the pixel array. For example, if the light intensity of the current shooting scene is relatively high, the phase information corresponding to the same pixel array may be directly output as the phase array corresponding to the pixel array.
  • the size of the output phase array is equal to the size of the pixel array.
  • phase information corresponding to the same pixel array may be combined to a certain extent to generate a phase array corresponding to the pixel array.
  • the size of the output phase array is smaller than the size of the pixel array.
  • In operation 840, output a phase array corresponding to the pixel array according to the phase information output mode; wherein, the phase array includes phase information corresponding to a target pixel in the pixel array.
  • the phase information corresponding to the pixel array can be output according to the phase information output mode. Specifically, when outputting the phase information corresponding to the pixel array, it may be output in the form of a phase array.
  • the phase array includes phase information corresponding to the pixel array.
  • the phase information corresponding to the same pixel array is directly output as the phase array corresponding to the pixel array or combined to a certain extent , to generate a phase array corresponding to the pixel array, which is not limited in this application.
  • Operation 860 calculate the phase difference of the pixel array based on the phase array, and perform focus control according to the phase difference.
  • the phase difference of the pixel array can be calculated based on the phase information in the phase array. Assuming the phase array of the pixel array in the second direction can be obtained, the phase difference is calculated based on two adjacent pieces of phase information in the second direction, finally obtaining the phase difference of the entire pixel array in the second direction. Assuming the phase array of the pixel array in the first direction can be obtained, the phase difference is calculated based on two adjacent pieces of phase information in the first direction, finally obtaining the phase difference of the entire pixel array in the first direction, where the second direction is different from the first direction.
  • the second direction may be the vertical direction of the pixel array
  • the first direction may be the horizontal direction of the pixel array
  • the second direction and the first direction are perpendicular to each other.
  • the phase differences of the entire pixel array in the second direction and the first direction can be obtained at the same time, and the phase difference of the pixel array in other directions can also be calculated, such as the diagonal directions (including the first diagonal direction and the second diagonal direction, where the first diagonal direction is perpendicular to the second diagonal direction), etc., which are not limited in this application.
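The application does not fix the matching algorithm used to turn adjacent phase information into a phase difference; a common way to estimate the shift between left and right phase sequences is a brute-force search, sketched here under that assumption (`phase_difference` and the mean-absolute-difference cost are illustrative, not from this application).

```python
def phase_difference(left, right, max_shift=3):
    """Estimate the shift (in samples) that best aligns two phase sequences.

    Brute-force search over integer shifts, scoring each candidate by the
    mean absolute difference of the overlapping samples.
    """
    best_shift, best_cost = 0, float("inf")
    for shift in range(-max_shift, max_shift + 1):
        pairs = [(left[i], right[i + shift])
                 for i in range(len(left)) if 0 <= i + shift < len(right)]
        cost = sum(abs(a - b) for a, b in pairs) / len(pairs)
        if cost < best_cost:
            best_shift, best_cost = shift, cost
    return best_shift

signal = [0, 1, 5, 9, 5, 1, 0, 0]
shifted = [0, 0] + signal[:-2]   # the same edge displaced by 2 samples
# phase_difference(signal, shifted) == 2
```

In a real sensor pipeline this search runs on the ISP over the phase arrays described below; the resulting shift is what drives the defocus-distance calculation.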
  • For a texture feature in a given direction, the phase difference parallel to that direction is almost 0, so focusing obviously cannot be based on a phase difference collected parallel to that direction. Therefore, if the preview image corresponding to the current shooting scene includes texture features in the first direction, the phase difference of the pixel array in the second direction is calculated based on the phase array of the pixel array in the second direction, and focusing control is performed according to the phase difference of the pixel array in the second direction.
  • the preview image includes the texture feature in the first direction, which means that the preview image includes horizontal stripes, which may be solid color stripes in the horizontal direction.
  • focus control is performed based on the phase difference in the vertical direction.
  • the preview image corresponding to the current shooting scene includes texture features in the second direction
  • focus control is performed based on the phase difference in the first direction. If the preview image corresponding to the current shooting scene includes texture features in the first diagonal direction, focus control is performed based on the phase difference in the second diagonal direction, and vice versa. In this way, for the texture features in different directions, the phase difference can be accurately collected, and then the focus can be accurately focused.
  • a phase information output mode adapted to the light intensity of the current shooting scene is determined according to the light intensity of the current shooting scene; wherein, in different phase information output modes, the sizes of the output phase arrays are different.
  • a phase array corresponding to the pixel array is output; wherein, the phase array includes phase information corresponding to the pixel array.
  • the phase difference of the pixel array is calculated based on the phase array, and focus control is performed according to the phase difference.
  • phase information output modes can be adopted for the same pixel array, and phase arrays of different sizes can be output based on the original phase information. Since the signal-to-noise ratios of phase arrays with different sizes are different, the accuracy of the phase information output under different light intensities can be improved, thereby improving the accuracy of focus control.
  • the specific implementation operation of determining the phase information output mode adapted to the light intensity of the current shooting scene includes:
  • a phase information output mode suitable for the light intensity of the current shooting scene is determined.
  • the light intensities may be divided into different light intensity ranges in order of magnitude.
  • the preset threshold of the light intensity can be determined according to the exposure parameters and the size of the pixels in the pixel array.
  • the exposure parameters include shutter speed, lens aperture size, and sensitivity (ISO).
  • Different phase information output modes are set for different light intensity ranges. Specifically, as the light intensity range decreases, the size of the phase array output by the corresponding phase information output mode decreases successively.
  • the light intensity range is used as the target light intensity range to which the light intensity of the current shooting scene belongs.
  • the phase information output mode corresponding to the target light intensity range is used as the phase information output mode adapted to the light intensity of the current shooting scene.
  • When determining the phase information output mode adapted to the light intensity of the current shooting scene, since different light intensity ranges correspond to different phase information output modes, first determine the target light intensity range to which the light intensity of the current shooting scene belongs; then, according to the target light intensity range, determine the adapted phase information output mode. Different phase information output modes are set in advance for different light intensity ranges, and the sizes of the phase arrays output in each mode differ. Therefore, based on the light intensity of the current shooting scene, the phase information of the pixel array can be calculated more finely, achieving more accurate focusing.
  • the phase information output mode includes the full-size output mode and the first-size output mode, and the size of the phase array in the full-size output mode is larger than that in the first-size output mode.
  • Operation 824a if the light intensity of the current shooting scene is greater than the first preset threshold, determine that the phase information output mode adapted to the light intensity of the current shooting scene is the full-size output mode;
  • the phase information output mode corresponding to this light intensity range is the full-size output mode. Then, if it is determined that the light intensity of the current shooting scene is greater than the first preset threshold, the light intensity of the current shooting scene falls within this light intensity range; that is, it is determined that the phase information output mode adapted to the light intensity of the current shooting scene is the full-size output mode.
  • Outputting the phase array in the full-size output mode means outputting all the original phase information of the pixel array to generate the phase array of the pixel array.
  • the phase information output mode corresponding to the light intensity range is the first size output mode. If it is determined that the light intensity of the current shooting scene is greater than the second preset threshold and less than or equal to the first preset threshold, then the light intensity of the current shooting scene falls within the light intensity range. That is, it is determined that the phase information output mode adapted to the light intensity of the current shooting scene is the first size output mode. Wherein, outputting the phase array in the first size output mode is to combine and output the original phase information of the pixel array to generate the phase array of the pixel array.
  • the size of the phase array in the full-size output mode is greater than the size of the phase array in the first-size output mode
  • the light intensity of the current shooting scene is greater than the first preset threshold
  • the phase information output mode adapted to the light intensity is the full-scale output mode. If the light intensity of the current shooting scene is greater than the second preset threshold and less than or equal to the first preset threshold, it is determined that the phase information output mode adapted to the light intensity of the current shooting scene is the first size output mode.
  • the full-size output mode outputs a phase array with the same size as the pixel array, while the first-size output mode outputs a smaller phase array. That is, when the light intensity of the current shooting scene is lower, the signal-to-noise ratio of the phase information is improved by reducing the phase array.
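The threshold logic above can be sketched as a small selector. The 2000 lux and 500 lux values are taken from the examples given later in this application; the function name and the label for the mode below the first size are hypothetical.

```python
# Hypothetical names; thresholds follow the examples in the text
# (first preset threshold 2000 lux, second preset threshold 500 lux).
FIRST_THRESHOLD_LUX = 2000
SECOND_THRESHOLD_LUX = 500

def select_phase_output_mode(light_intensity_lux):
    """Map scene illuminance to a phase information output mode."""
    if light_intensity_lux > FIRST_THRESHOLD_LUX:
        return "full_size"    # bright scene: output every sub-pixel phase
    if light_intensity_lux > SECOND_THRESHOLD_LUX:
        return "first_size"   # dimmer scene: merge phases for better SNR
    return "smaller_size"     # dimmest scenes: merge further (assumed label)
```

The boundaries are exclusive at the top of each range, matching the "greater than the second preset threshold and less than or equal to the first preset threshold" wording in the text.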
  • the pixel array can be an RGBW pixel array, including a plurality of minimum repeating units 241; the minimum repeating unit 241 includes a plurality of pixel groups 242, and the plurality of pixel groups 242 include panchromatic pixel groups 243 and color pixel groups 244.
  • Each panchromatic pixel group 243 includes 9 panchromatic pixels 2431
  • each color pixel group 244 includes 9 color pixels 2441 .
  • Each panchromatic pixel 2431 includes 2 sub-pixels arranged in an array
  • each color pixel 2441 includes 2 sub-pixels arranged in an array.
  • phase information output mode is the full-size output mode
  • Operation 1040 for each target pixel group, acquire phase information of sub-pixels of each pixel in the target pixel group;
  • Operation 1060 Generate a full-size phase array corresponding to the pixel array according to the phase information of the sub-pixels of the target pixels; the size of the full-size phase array is the size of 6×3 pixels arranged in the array.
  • This embodiment is a specific implementation operation of outputting the phase array corresponding to the pixel array according to the full-size output mode when the light intensity of the current shooting scene is greater than the first preset threshold.
  • the first preset threshold may be 2000 lux, which is not limited in this application; that is, the scene is in an environment where the light intensity is greater than 2000 lux.
  • the color pixel group is determined from the pixel array as the target pixel group for calculating the phase information. When the light intensity of the current shooting scene is greater than the first preset threshold, that is, in a scene with sufficient light, panchromatic pixels are easily saturated due to their high sensitivity, and correct phase information cannot be obtained after saturation; so the phase information of the color pixel groups can be used at this time to realize phase detection autofocus (PDAF).
  • the phase focusing can be realized by using the phase information of part of the color pixel groups in the pixel array, or part of the pixels in the part of the pixel groups can be used to realize the phase focusing, which is not limited in this application. Since only the phase information of the color pixel group is used for phase focusing at this time, the data volume of the output phase information is reduced, thereby improving the efficiency of phase focusing.
  • each pixel includes at least two sub-pixels arranged in an array, and each sub-pixel corresponds to a photosensitive element. Assuming that each pixel includes two sub-pixels arranged in an array at this time, the two sub-pixels may be arranged up and down or left and right, which is not limited in this application.
  • the phase information of the sub-pixels of each target pixel in the target pixel group is obtained; that is, for each pixel in each target pixel group, the phase information of its two sub-pixels arranged left and right is obtained.
  • output the phase information of all target pixels as a full-size phase array corresponding to the pixel array.
  • a pixel array may include 2 red pixel groups, 4 green pixel groups, 2 blue pixel groups, and 8 panchromatic pixel groups.
  • a represents green
  • b represents red
  • c represents blue
  • w represents full color.
  • the phase information of the red pixel group is calculated for the red pixel group 244.
  • the red pixel group includes nine red pixels arranged in a 3×3 array, numbered sequentially as red pixel 1 through red pixel 9. Each pixel includes two sub-pixels arranged left and right, and each sub-pixel corresponds to a photosensitive element.
  • red pixel 1 includes two sub-pixels arranged left and right, whose phase information is L1 and R1 respectively; red pixel 2's sub-pixels have phase information L2 and R2; red pixel 3's have L3 and R3; red pixel 4's have L4 and R4; red pixel 5's have L5 and R5; red pixel 6's have L6 and R6; and likewise red pixel 7's have L7 and R7, red pixel 8's have L8 and R8, and red pixel 9's have L9 and R9.
  • L1, R1, L2, R2, L3, R3, L4, R4, L5, R5, L6, R6, L7, R7, L8, R8, L9, R9 are arranged in sequence to generate a full-size phase array.
  • the size of the full-size phase array is equivalent to the size of 6×3 pixels arranged in the array.
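The interleaving just described (L1, R1, L2, R2, … arranged into an array the size of 6×3 pixels) can be sketched as follows; `full_size_phase_array` and the (L, R) tuple representation are assumptions for illustration, not details from this application.

```python
def full_size_phase_array(pixel_group):
    """Interleave left/right sub-pixel phases of a 3x3 pixel group.

    `pixel_group` is a 3x3 grid of (L, R) phase tuples; the result has
    3 rows of 6 values, i.e. the full-size array of 6x3 sub-pixels.
    """
    return [[phase
             for left_right in row
             for phase in left_right]
            for row in pixel_group]

# Pixels numbered 1..9 as in the red pixel group example above.
group = [[("L%d" % (3 * r + c + 1), "R%d" % (3 * r + c + 1))
          for c in range(3)] for r in range(3)]
array = full_size_phase_array(group)
# array[0] == ["L1", "R1", "L2", "R2", "L3", "R3"]
```

Nothing is merged in this mode: every sub-pixel's phase value appears in the output, which is why it is reserved for bright scenes where each individual reading is reliable.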
  • the pixel size refers to the area size of a pixel, and the area size is related to the length and width of the pixel.
  • a pixel is the smallest photosensitive unit on a photosensitive device (CCD or CMOS) of a digital camera.
  • CCD is the abbreviation of charge-coupled device; CMOS stands for complementary metal-oxide-semiconductor.
  • a pixel has no fixed size, and the size of a pixel is related to the size and resolution of the display screen.
  • the size of the 6×3 pixels arranged in the array is: 6 × 0.0778 mm in length and 3 × 0.0778 mm in width.
  • the size of the full-size phase array is 6 × 0.0778 mm in length and 3 × 0.0778 mm in width.
  • the pixels need not be rectangles with equal length and width, and may also have other irregular structures, which is not limited in this application.
  • the above method is also used to generate their respective full-size phase arrays; based on all the full-size phase arrays, the phase information of the pixel array is obtained.
  • the phase array can be input into the ISP (image signal processor), which calculates the phase difference of the pixel array based on the phase array. Then, the defocus distance is calculated from the phase difference, along with the DAC code value corresponding to the defocus distance. Finally, the code value is converted into a driving current by the driver IC of the voice coil motor (VCM), and the motor drives the lens to the in-focus position.
  • the phase array corresponding to the pixel array is output according to the full-size output mode.
  • the color pixel groups in the pixel array are used as target pixel groups, and for each target pixel group, phase information of sub-pixels of each pixel in the target pixel group is acquired. Finally, according to the phase information of the sub-pixels of the target pixel, a full-scale phase array corresponding to the pixel array is generated. Since only the phase information of the color pixel group is used for phase focusing at this time, the data volume of the output phase information is reduced, thereby improving the efficiency of phase focusing.
  • When the phase information output mode is the first-size output mode, the specific implementation operations of outputting the phase array corresponding to the pixel array include:
  • Operation 1220 using at least one of a color pixel group and a panchromatic pixel group in the pixel array as a target pixel group;
  • This embodiment is to output the phase array corresponding to the pixel array according to the first size output mode when the light intensity of the current shooting scene is greater than the second preset threshold and less than or equal to the first preset threshold.
  • the second preset threshold may be 500 lux, which is not limited in this application; that is, the scene is in an environment where the light intensity is greater than 500 lux and less than or equal to 2000 lux.
  • As shown in FIG. 12, firstly, at least one of a color pixel group and a panchromatic pixel group is determined from the pixel array as a target pixel group for calculating phase information.
  • the panchromatic pixels are not easily saturated in a scene with slightly weaker light, so the phase information of the panchromatic pixel groups can be used at this time to realize phase detection autofocus (PDAF).
  • the color pixels can also obtain accurate phase information in the scene with weak light, so at this time, the phase information of the color pixel group can also be used to realize phase focusing (PDAF).
  • when outputting the phase array corresponding to the pixel array according to the first-size output mode, the color pixel groups may be selected as target pixel groups, the panchromatic pixel groups may be selected as target pixel groups, or both the color pixel groups and the panchromatic pixel groups may be selected as target pixel groups, which is not limited in this application.
  • the phase focusing can be realized by using the phase information of a part of the color pixel groups in the pixel array, or using part of the color pixels in the part of the color pixel groups to realize the phase focusing, This application does not limit this.
  • the phase information of some panchromatic pixel groups in the pixel array can be used to achieve phase focusing, or part of the panchromatic pixels in those panchromatic pixel groups can be used to achieve phase focusing, which is not limited in this application.
  • the phase information of part of the panchromatic pixel groups and part of the color pixel groups in the pixel array can be used to achieve phase focusing, or part of the panchromatic pixels in those panchromatic pixel groups and part of the color pixels in those color pixel groups can be used to achieve phase focusing, which is not limited in this application.
  • the phase information of some pixel groups can be used for phase focusing, or only the phase information of some pixels in some pixel groups can be used for phase focusing, so the data volume of the output phase information is reduced, thereby improving the efficiency of phase focusing.
  • Operation 1240 for each target pixel group, acquire phase information of multiple sets of two adjacent sub-pixels along the second direction, using one sub-pixel as the sliding-window step; wherein, the second direction is perpendicular to the first direction;
  • Operation 1260 combining multiple sets of phase information of two adjacent sub-pixels to generate multiple sets of first combined phase information
  • each pixel includes at least two sub-pixels arranged in an array, and each sub-pixel corresponds to a photosensitive element.
  • the two sub-pixels may be arranged up and down or left and right, which is not limited in this application.
  • two sub-pixels arranged left and right are used for illustration; then, for each target pixel group, the phase information of two adjacent sub-pixels is obtained along the second direction using one sub-pixel as the sliding-window step.
  • the phase information of two adjacent sub-pixels is combined to generate the first combined phase information.
  • the target pixel group is a panchromatic pixel group in the pixel array
  • the phase information of two adjacent sub-pixels can be obtained along the second direction using one sub-pixel as the sliding-window step, yielding 12 groups of phase information of adjacent sub-pixels. The 12 groups of phase information of adjacent sub-pixels are then respectively combined to generate 12 pieces of first combined phase information.
  • a pixel array may include 2 red pixel groups, 4 green pixel groups, 2 blue pixel groups and 8 panchromatic pixel groups. Assuming that all panchromatic pixel groups in the pixel array are used as target pixel groups, then for the 8 panchromatic pixel groups included in the pixel array, the phase information of each pixel group is sequentially calculated. For example, the phase information of the panchromatic pixel group is calculated for the panchromatic pixel group.
  • the panchromatic pixel group includes 9 panchromatic pixels arranged in a 3×3 array, sequentially numbered as panchromatic pixel 1 through panchromatic pixel 9.
  • each pixel includes two sub-pixels arranged left and right, and each sub-pixel corresponds to a photosensitive element.
  • panchromatic pixel 1 includes two sub-pixels arranged left and right, whose phase information is L1 and R1 respectively; panchromatic pixel 2's sub-pixels have phase information L2 and R2; panchromatic pixel 3's have L3 and R3; panchromatic pixel 4's have L4 and R4; panchromatic pixel 5's have L5 and R5; panchromatic pixel 6's have L6 and R6; and likewise panchromatic pixel 7's have L7 and R7, panchromatic pixel 8's have L8 and R8, and panchromatic pixel 9's have L9 and R9.
  • phase information of 12 groups of adjacent two sub-pixels is acquired along the second direction with one sub-pixel as the sliding window step.
  • the phase information of the 12 groups of adjacent two sub-pixels are: L1 and L4, L2 and L5, L3 and L6, L4 and L7, L5 and L8, L6 and L9; R1 and R4, R2 and R5, R3 and R6 , R4 and R7, R5 and R8, R6 and R9.
  • phase information of the 12 groups of adjacent two sub-pixels are respectively combined to generate 12 pieces of first combined phase information.
  • L1 and L4 are combined to generate first combined phase information L₁; L2 and L5 generate L₂; L3 and L6 generate L₃; L4 and L7 generate L₄; L5 and L8 generate L₅; and L6 and L9 generate L₆. Similarly, R1 and R4 are combined to generate first combined phase information R₁; R2 and R5 generate R₂; R3 and R6 generate R₃; R4 and R7 generate R₄; R5 and R8 generate R₅; and R6 and R9 generate R₆.
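The sliding-window merge just listed (L1 with L4, L2 with L5, …, R6 with R9) can be sketched as below. The application only says the two phases are "combined"; averaging is an assumed merge operation, and the function name is hypothetical.

```python
def first_combined_phase(pixel_group):
    """Merge vertically adjacent sub-pixel phases with a one-pixel step.

    `pixel_group` is a 3x3 grid of (L, R) numeric phases. Returns
    (l_comb, r_comb), each a 2x3 grid, i.e. 12 combined values total.
    """
    rows, cols = len(pixel_group), len(pixel_group[0])
    l_comb = [[(pixel_group[r][c][0] + pixel_group[r + 1][c][0]) / 2
               for c in range(cols)] for r in range(rows - 1)]
    r_comb = [[(pixel_group[r][c][1] + pixel_group[r + 1][c][1]) / 2
               for c in range(cols)] for r in range(rows - 1)]
    return l_comb, r_comb

# Pixel n carries (n, n + 10) as its (L, R) phases, n = 1..9.
group = [[(3 * r + c + 1, 3 * r + c + 11) for c in range(3)]
         for r in range(3)]
l_comb, r_comb = first_combined_phase(group)
# l_comb[0][0] combines pixels 1 and 4: (1 + 4) / 2 == 2.5
```

Because the window slides by one pixel, the two row-pairs overlap on the middle row, preserving vertical resolution while still averaging away noise.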
  • Operation 1280 Generate a phase array of a first size corresponding to the pixel array according to multiple sets of first combined phase information, where the size of the first-size phase array is the size of 4×2 pixels arranged in the array.
  • a phase array of the first size corresponding to the pixel array can be generated according to the first combined phase information. Specifically, if the target pixel group is a panchromatic pixel group in the pixel array, the phase information of two adjacent sub-pixels is combined to generate 12 sets of first combined phase information. Then, the 12 sets of first combined phase information may be directly output as a first-size phase array corresponding to the panchromatic pixel array.
  • L₁, R₁, L₂, R₂, L₃, R₃, L₄, R₄, L₅, R₅, L₆, R₆ are arranged in sequence to generate a first-size phase array.
  • the size of the phase array of the first size is equivalent to the size of 6×2 pixels arranged in the array.
  • the transform processing here may be, for example, correction performed on the 12 sets of first combined phase information, which is not limited in this application.
  • the phase array equivalent to 6×2 pixels can be combined into a phase array equivalent to 4×2 pixels.
  • the present application does not limit the specific size of the combined phase array.
  • the size of the 4×2 pixels arranged in the array is: 4 × 0.0778 mm in length and 2 × 0.0778 mm in width.
  • correspondingly, the size of the first-size phase array is 4 × 0.0778 mm in length and 2 × 0.0778 mm in width.
  • the pixels need not be rectangles with equal length and width, and may also have other irregular structures, which is not limited in this application.
  • the above method is also used to generate their own phase arrays of the first size. Based on all phase arrays of the first size, the phase information of the pixel array is obtained.
  • the phase array can be input into the ISP, which calculates the phase difference of the pixel array based on the phase array. Then, the defocus distance is calculated from the phase difference, along with the DAC code value corresponding to the defocus distance. Finally, the code value is converted into a driving current by the driver IC of the voice coil motor (VCM), and the motor drives the lens to the in-focus position.
  • when the light intensity of the current shooting scene is greater than the second preset threshold and less than or equal to the first preset threshold, the light is slightly weaker, so the phase information collected by the color pixel groups or the panchromatic pixel groups is not very accurate, and some color pixel groups or panchromatic pixel groups may fail to collect phase information at all. Therefore, at least one of the color pixel groups and the panchromatic pixel groups in the pixel array is used as the target pixel groups, and for each target pixel group, the phase information of the sub-pixels is combined to a certain extent using the first-size output mode, improving the accuracy of the output phase information and its signal-to-noise ratio. Ultimately, performing phase focusing based on the first-size phase array corresponding to the pixel array improves focusing accuracy.
  • operation 1280 is to generate a first-size phase array corresponding to the pixel array according to multiple sets of first combined phase information, including:
  • a first-sized phase array of the pixel array in the first direction is generated.
  • two adjacent pieces of first combined phase information are determined from the first combined phase information. Specifically, it is determined whether the sub-pixels used to generate each piece of first combined phase information are at the same position within their pixels; if so, the two pieces of first combined phase information are determined to be adjacent. Multiple sets of two adjacent pieces of first combined phase information are then combined again to generate target phase information, and outputting the target phase information generates the first-size phase array of the pixel array in the first direction.
  • the first combined phase information L 1 is generated from the sub-pixels in the left halves of panchromatic pixel 1 and panchromatic pixel 4;
  • the sub-pixels used to generate the first combined phase information L 2 are the sub-pixels in the left halves of panchromatic pixel 2 and panchromatic pixel 5;
  • the sub-pixels in the left halves of panchromatic pixels 1 and 4 and the sub-pixels in the left halves of panchromatic pixels 2 and 5 are at the same position (all on the left side) in their respective pixels. Therefore, the first combined phase information L 1 and the first combined phase information L 2 are determined to be two adjacent pieces of first combined phase information. Similarly, the first combined phase information L 2 and L 3 , the first combined phase information L 4 and L 5 , and the first combined phase information L 5 and L 6 are each two adjacent pieces of first combined phase information.
  • the first combined phase information R 1 and the first combined phase information R 2 are two adjacent first combined phase information.
  • the two adjacent pieces of first combined phase information are combined again to generate target phase information. That is, the first combined phase information L 1 and L 2 are combined again to generate target phase information. Similarly, the first combined phase information L 2 and L 3 are combined again to generate target phase information; the first combined phase information L 4 and L 5 are combined again to generate target phase information; and the first combined phase information L 5 and L 6 are combined again to generate target phase information.
  • likewise, the first combined phase information R 1 and R 2 are combined again to generate target phase information; R 2 and R 3 are combined again to generate target phase information; R 4 and R 5 are combined again to generate target phase information; and R 5 and R 6 are combined again to generate target phase information.
  • phase array of the first size of the pixel array in the first direction is generated.
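The pairwise re-combination of adjacent first combined phase information described above can be sketched as follows; averaging is an assumed choice for the combination operation, and the input values are illustrative.

```python
def merge_adjacent_pairs(values):
    """Merge overlapping adjacent pairs, e.g. [L1, L2, L3] -> [avg(L1, L2), avg(L2, L3)]."""
    return [(a + b) / 2 for a, b in zip(values, values[1:])]

# illustrative first combined phase information from sub-pixels at the same (left) position
left_halves = [10.0, 12.0, 14.0]
target_left = merge_adjacent_pairs(left_halves)  # two pieces of target phase information
```

The same helper applies unchanged to the right-half values R 1 ..R 3 .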
  • when the light intensity of the current shooting scene is greater than the second preset threshold and less than or equal to the first preset threshold, the light is slightly weak, so the phase information collected by the color pixel groups or panchromatic pixel groups is not very accurate, and some color pixel groups or panchromatic pixel groups may collect no phase information at all. Therefore, at least one of the color pixel groups or the panchromatic pixel groups in the pixel array is used as the target pixel group, and for each target pixel group the phase information of the sub-pixels is combined twice using the first size output mode, which improves the accuracy of the output phase information and its signal-to-noise ratio.
  • focusing accuracy can be improved by performing phase focusing based on the phase array of the first size corresponding to the pixel array.
  • the target pixel group includes a color pixel group and a panchromatic pixel group
  • the target phase information generate a first-size phase array of the pixel array in the first direction, including:
  • a first-sized phase array of the pixel array in the first direction is generated.
  • if the determined target pixel group includes both a color pixel group and a panchromatic pixel group, the weights between the different pixel groups may be considered.
  • the first phase weight corresponding to the color pixel group and the second phase weight corresponding to the panchromatic pixel group can be determined according to the light intensity of the current shooting scene.
  • the color pixel groups correspond to different first phase weights under different light intensities
  • the panchromatic pixel groups correspond to different second phase weights under different light intensities.
  • the first phase weight corresponding to the color pixel group is 40%, of which the phase weight of the green pixel group is 20%, the phase weight of the red pixel group is 10%, and the phase weight of the blue pixel group is 10%.
  • the second phase weight corresponding to the panchromatic pixel group is 60%, which is not limited in this application.
  • a phase array of the first size of the pixel array in the first direction can be generated. For example, for this pixel array, the phase information of the pixel array in the first direction is calculated as a weighted sum of: the target phase information of the first red pixel group with a phase weight of 10%, the target phase information of the second red pixel group with a phase weight of 10%, the target phase information of the first blue pixel group with a phase weight of 10%, the target phase information of the second blue pixel group with a phase weight of 10%, the target phase information of each green pixel group with a phase weight of 20%, and the target phase information of each panchromatic pixel group with a phase weight of 60%. That is, the phase array of the first size is obtained.
  • the target phase information of the color pixel group and its first phase weight, together with the target phase information of the panchromatic pixel group and its second phase weight, can be used to generate the phase array of the first size of the pixel array.
  • the phase array of the first size of the pixel array is jointly generated, which can improve the comprehensiveness of the phase information.
  • the phase weights of the target phase information of the color pixel group and the panchromatic pixel group are different under different light intensities. In this way, the accuracy of the phase information can be improved by adjusting the weights under different light intensities.
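The weighted combination described above can be sketched as a weighted sum of per-group target phase information. This is a minimal sketch assuming a simple linear weighting; the 40%/60% split follows the example in the text, while the phase values themselves are illustrative.

```python
def weighted_phase(groups):
    """groups: iterable of (target_phase, phase_weight) pairs; returns the weighted sum."""
    return sum(phase * weight for phase, weight in groups)

# e.g. one color-group phase weighted at 40% overall and one panchromatic-group phase at 60%
combined = weighted_phase([(10.0, 0.4), (20.0, 0.6)])
```

Adjusting the weight table per light-intensity range, as the text describes, only changes the `phase_weight` entries passed in.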
  • the pixel array can be an RGBW pixel array, including a plurality of minimum repeating units 241, the minimum repeating unit 241 includes a plurality of pixel groups 242, and the plurality of pixel groups 242 includes panchromatic pixel groups 243 and color pixel groups 244.
  • Each panchromatic pixel group 243 includes 9 panchromatic pixels 2431
  • each color pixel group 244 includes 9 color pixels 2441 .
  • Each panchromatic pixel 2431 includes 2 sub-pixels arranged in an array
  • each color pixel 2441 includes 2 sub-pixels arranged in an array.
  • when the pixel array is an RGBW pixel array, the phase information output modes also include a second size output mode and a third size output mode; wherein the size of the phase array in the first size output mode is larger than that in the second size output mode.
  • operation 824, determining the phase information output mode adapted to the light intensity of the current shooting scene, includes:
  • Operation 824c if the light intensity of the current shooting scene is greater than the third preset threshold and less than or equal to the second preset threshold, determine that the phase information output mode adapted to the light intensity of the current shooting scene is the second size output mode;
  • Operation 824d if the light intensity of the current shooting scene is less than or equal to the third preset threshold, determine that the phase information output mode adapted to the light intensity of the current shooting scene is the third size output mode; the second preset threshold is greater than the third preset threshold.
  • the phase information output mode corresponding to the light intensity range is the second size output mode. If it is determined that the light intensity of the current shooting scene is greater than the third preset threshold and less than or equal to the second preset threshold, then the light intensity of the current shooting scene falls within the light intensity range. That is, it is determined that the phase information output mode adapted to the light intensity of the current shooting scene is the second size output mode.
  • the third preset threshold may be 50 lux, which is not limited in this application. That is, the second size output mode applies at dusk or in an environment where the light intensity is greater than 50 lux and less than or equal to 500 lux.
  • outputting the phase array in the second size output mode is to combine and output the original phase information of the pixel array to generate the phase array of the pixel array.
  • the size of the pixel array is larger than the size of the phase array of the pixel array. For example, if the size of the pixel array is 12 × 12, the size of the phase array of each target pixel group in the pixel array is 2 × 1, and the size of the phase array is not limited in this application.
  • the phase information output mode corresponding to the light intensity range is the third size output mode. Then, if it is determined that the light intensity of the current shooting scene is less than or equal to the third preset threshold, the light intensity of the current shooting scene falls within the light intensity range. That is, it is determined that the phase information output mode adapted to the light intensity of the current shooting scene is the third size output mode.
  • the third-size output mode is used to output the phase array, that is, the original phase information of the pixel array is combined and output to generate the phase array of the pixel array.
  • the size of the pixel array is larger than the size of the phase array of the pixel array. For example, if the size of the pixel array is 12 × 12, the size of the phase array of the pixel array is 4 × 2, and the size of the pixel array and the size of the phase array are not limited in this application.
  • if the light intensity of the current shooting scene is greater than the third preset threshold and less than or equal to the second preset threshold, it is determined that the phase information output mode adapted to the light intensity of the current shooting scene is the second size output mode. If the light intensity of the current shooting scene is less than or equal to the third preset threshold, it is determined that the phase information output mode adapted to the light intensity of the current shooting scene is the third size output mode.
  • the second-size output mode is used to output a phase array corresponding to the pixel array, and the third-size output mode is used to output a phase array of a smaller size than that of the second-size output mode. That is, when the light intensity of the current shooting scene is weaker, the signal-to-noise ratio of the phase information is improved by reducing the size of the phase array.
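The threshold-based mode selection described above can be sketched as follows. The second and third preset thresholds (500 lux and 50 lux) follow the examples in the text; the first preset threshold value of 1000 lux and the mode name used above the first threshold are assumptions for illustration only.

```python
def select_output_mode(lux, first=1000.0, second=500.0, third=50.0):
    """Pick the phase information output mode from the scene light intensity (lux)."""
    if lux <= third:
        return "third_size"    # dark night: smallest phase array, best SNR
    if lux <= second:
        return "second_size"   # dusk: smaller phase array
    if lux <= first:
        return "first_size"    # slightly weak light: larger phase array
    return "full_size"         # bright light above the first threshold (assumed naming)
```

The cascade of comparisons mirrors the "greater than ... and less than or equal to ..." ranges in operations 824c and 824d.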
  • at least two photosensitive elements corresponding to each target pixel in the target pixel group are arranged along the first direction; if the phase information output mode is the second size output mode, then, as shown in FIG. 14, outputting the phase array corresponding to the pixel array according to the phase information output mode includes:
  • Operation 1420 using the color pixel group and the panchromatic pixel group in the pixel array as the target pixel group, or using the panchromatic pixel group as the target pixel group;
  • this embodiment outputs the phase array corresponding to the pixel array according to the second size output mode when the light intensity of the current shooting scene is greater than the third preset threshold and less than or equal to the second preset threshold.
  • the color pixel group and the panchromatic pixel group in the pixel array are used as the target pixel group, or the panchromatic pixel group is used as the target pixel group for calculating phase information.
  • the panchromatic pixels can receive more light information, so at this time the phase information of the panchromatic pixel groups can be used to implement phase detection autofocus (PDAF). In a scene with weaker light, the color pixels can also assist the panchromatic pixels in obtaining accurate phase information, so phase focusing can be achieved using the phase information of both the color pixel groups and the panchromatic pixel groups, or using only the phase information of the panchromatic pixel groups.
  • therefore, when outputting the phase array corresponding to the pixel array in the second size output mode, either the color pixel groups together with the panchromatic pixel groups, or the panchromatic pixel groups alone, can be selected as the target pixel groups, which is not limited in this application.
  • the phase information of some panchromatic pixel groups in the pixel array can be used to achieve phase focusing, or only some of the panchromatic pixels within those panchromatic pixel groups can be used to achieve phase focusing, which is not limited in this application.
  • when the target pixel groups include both color pixel groups and panchromatic pixel groups, the phase information of some panchromatic pixel groups and some color pixel groups in the pixel array can be used to achieve phase focusing, or only some of the panchromatic pixels in those panchromatic pixel groups and some of the color pixels in those color pixel groups can be used to achieve phase focusing, which is not limited in this application.
  • the phase information of only some pixel groups, or only some pixels in some pixel groups, can be used for phase focusing, which reduces the data volume of the output phase information and thereby improves the efficiency of phase focusing.
  • Operation 1440 for each target pixel group, acquire phase information of multiple groups of three adjacent sub-pixels along the second direction;
  • Operation 1460 combining multiple sets of phase information of three adjacent sub-pixels to generate multiple sets of second combined phase information
  • each pixel includes at least two sub-pixels arranged in an array, and each sub-pixel corresponds to a photosensitive element.
  • the two sub-pixels may be arranged up and down or left and right, which is not limited in this application.
  • two sub-pixels arranged left and right are selected for illustration. Then, for each target pixel group, phase information of multiple sets of three adjacent sub-pixels is acquired along the second direction, and the phase information of each set of three adjacent sub-pixels is combined to generate second combined phase information.
  • the target pixel group is a panchromatic pixel group in the pixel array
  • multiple groups of phase information of three adjacent sub-pixels are obtained along the second direction; specifically, phase information of six groups of three adjacent sub-pixels can be obtained. Then, the phase information of the 6 groups of three adjacent sub-pixels is combined respectively to generate 6 groups of second combined phase information.
  • a pixel array may include 2 red pixel groups, 4 green pixel groups, 2 blue pixel groups and 8 panchromatic pixel groups. Assuming that all panchromatic pixel groups in the pixel array are used as target pixel groups, then for the 8 panchromatic pixel groups included in the pixel array, the phase information of each pixel group is sequentially calculated. For example, the phase information of the panchromatic pixel group is calculated for the panchromatic pixel group.
  • the panchromatic pixel group includes 9 panchromatic pixels arranged in a 3 × 3 array, which are sequentially numbered panchromatic pixel 1, panchromatic pixel 2, panchromatic pixel 3, panchromatic pixel 4, panchromatic pixel 5, panchromatic pixel 6, panchromatic pixel 7, panchromatic pixel 8, and panchromatic pixel 9.
  • each pixel includes two sub-pixels arranged left and right, and each sub-pixel corresponds to a photosensitive element.
  • panchromatic pixel 1 includes two sub-pixels arranged left and right, and the phase information of these two sub-pixels is L1 and R1 respectively;
  • panchromatic pixel 2 includes two sub-pixels arranged left and right, and the phase information of these two sub-pixels is L2 and R2 respectively;
  • panchromatic pixel 3 includes two sub-pixels arranged left and right, and the phase information of these two sub-pixels is L3 and R3 respectively;
  • panchromatic pixel 4 includes two sub-pixels arranged left and right, and the phase information of these two sub-pixels is L4 and R4 respectively;
  • panchromatic pixel 5 includes two sub-pixels arranged left and right, and the phase information of these two sub-pixels is L5 and R5 respectively;
  • panchromatic pixel 6 includes two sub-pixels arranged left and right, and the phase information of these two sub-pixels is L6 and R6 respectively;
  • panchromatic pixels 7, 8 and 9 likewise each include two sub-pixels arranged left and right, with phase information L7 and R7, L8 and R8, and L9 and R9 respectively.
  • phase information of multiple groups of adjacent three sub-pixels is acquired along the second direction.
  • the phase information of the six groups of adjacent three sub-pixels are: L1, L4 and L7, L2, L5 and L8, L3, L6 and L9; R1, R4 and R7, R2, R5 and R8, R3, R6 and R9 .
  • the six groups of phase information of the three adjacent sub-pixels are respectively combined to generate six groups of second combined phase information.
  • L1, L4 and L7 are combined to generate second combined phase information L 1
  • L2, L5 and L8 are combined to generate second combined phase information L 2
  • L3, L6 and L9 are combined to generate second combined phase information L 3
  • R1, R4 and R7 are combined to generate second combined phase information R 1
  • R2, R5 and R8 are combined to generate second combined phase information R 2
  • R3, R6 and R9 are combined to generate second combined phase information R 3 .
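The six column combinations listed above can be sketched for one 3 × 3 panchromatic pixel group as follows. Each pixel is represented as an (L, R) pair of sub-pixel phase readings, and averaging is an assumed choice of the combination operation; the readings are illustrative.

```python
def combine_columns(group):
    """group: 3x3 list (row-major) of (L, R) sub-pixel phase readings.
    Combines three vertically adjacent sub-pixels per column, e.g. L1, L4, L7 -> L 1.
    Returns ([L 1, L 2, L 3], [R 1, R 2, R 3])."""
    lefts = [sum(group[row][col][0] for row in range(3)) / 3 for col in range(3)]
    rights = [sum(group[row][col][1] for row in range(3)) / 3 for col in range(3)]
    return lefts, rights

row = [(1.0, 4.0), (2.0, 5.0), (3.0, 6.0)]          # illustrative readings for one row
second_combined = combine_columns([row, row, row])  # -> ([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
```

With row-major numbering, pixels 1, 4 and 7 form a column, so the column sums reproduce exactly the groupings L1, L4 and L7, etc., given in the text.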
  • Operation 1480 Generate a phase array of a second size corresponding to the pixel array according to multiple sets of second combined phase information, where the size of the phase array of the second size is 2 ⁇ 1 pixel.
  • a phase array of a second size corresponding to the pixel array can be generated according to the multiple sets of second combined phase information.
  • the target pixel group is a panchromatic pixel group in the pixel array
  • multiple sets of phase information of three adjacent sub-pixels are combined to generate 6 sets of second combined phase information.
  • the 6 sets of second combined phase information may be directly output as a phase array corresponding to the panchromatic pixel group. That is, L 1 , R 1 , L 2 , R 2 , L 3 , and R 3 are arranged in sequence to generate a phase array of the first size.
  • the size of this phase array of the first size is equivalent to the size of 6 × 1 pixels arranged in the array.
  • the conversion processing may be processing such as correcting the six sets of second combined phase information, which is not limited in this application.
  • the phase arrays equivalent to 6 × 1 pixels can be combined into 2 × 1 pixel phase arrays.
  • the present application does not limit the specific size of the combined phase array.
  • the pixel size refers to the area size of a pixel, and the area size is related to the length and width of the pixel.
  • a pixel is the smallest photosensitive unit on a photosensitive device (CCD or CMOS) of a digital camera.
  • the size of the 2 × 1 pixels arranged in the array is: the length is 2 × 0.0778 mm, and the width is 1 × 0.0778 mm.
  • that is, the size of the phase array of the second size is 2 × 0.0778 mm in length and 1 × 0.0778 mm in width.
  • the pixels may not be rectangles with equal length and width, and the pixels may also have other irregular shapes, which is not limited in this application.
  • the above method is also used to generate their respective phase arrays of the second size. Based on all the phase arrays of the second size, the phase information of the pixel array is obtained.
  • the phase array can be input into the ISP, which calculates the phase difference of the pixel array based on the phase array. The defocus distance is then calculated from the phase difference, and the DAC code value corresponding to the defocus distance is calculated. Finally, the driver IC of the voice coil motor (VCM) converts the code value into a driving current, and the motor drives the lens to move to the in-focus position.
  • the phase information of the panchromatic pixel group can be used to achieve phase focusing (PDAF).
  • in a scene with weaker light, the color pixels can also assist the panchromatic pixels in obtaining accurate phase information, so phase focusing (PDAF) can be achieved using the phase information of both the color pixel groups and the panchromatic pixel groups, or using only the phase information of the panchromatic pixel groups.
  • therefore, the color pixel groups together with the panchromatic pixel groups, or the panchromatic pixel groups alone, are used as the target pixel groups in the pixel array, and for each target pixel group the phase information of the sub-pixels is combined to a certain extent using the second size output mode, which improves the accuracy of the output phase information and its signal-to-noise ratio. Finally, focusing accuracy can be improved by performing phase focusing based on the phase array of the second size corresponding to the pixel array.
  • a phase array of a second size corresponding to the pixel array is generated, including:
  • a second-sized phase array of the pixel array in the first direction is generated according to the target phase information.
  • three adjacent pieces of second combined phase information are determined from the second combined phase information. Specifically, it is determined whether the sub-pixels used to generate the pieces of second combined phase information are at the same position in their respective pixels; if so, the three pieces are determined to be three adjacent pieces of second combined phase information. The three adjacent pieces of second combined phase information are combined again to generate target phase information. By outputting the target phase information, a second-size phase array of the pixel array in the first direction is generated.
  • firstly, three adjacent pieces of second combined phase information are determined.
  • the sub-pixels used to generate the second combined phase information L 1 are the sub-pixels in the left halves of panchromatic pixel 1, panchromatic pixel 4 and panchromatic pixel 7; the sub-pixels used to generate the second combined phase information L 2 are the sub-pixels in the left halves of panchromatic pixel 2, panchromatic pixel 5 and panchromatic pixel 8; and the sub-pixels used to generate the second combined phase information L 3 are the sub-pixels in the left halves of panchromatic pixel 3, panchromatic pixel 6 and panchromatic pixel 9.
  • the second combined phase information L 1 , the second combined phase information L 2 and the second combined phase information L 3 are three adjacent second combined phase information.
  • the second combined phase information R 1 , the second combined phase information R 2 , and the second combined phase information R 3 are all adjacent three second combined phase information.
  • Three adjacent second combined phase information are combined again to generate target phase information. That is, the second combined phase information L 1 , the second combined phase information L 2 and the second combined phase information L 3 are combined again to generate target phase information. Similarly, the second combined phase information R 1 , the second combined phase information R 2 and the second combined phase information R 3 are combined again to generate target phase information.
  • Outputting all the target phase information generates a second-sized phase array of the pixel array in the first direction.
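The second combination step above, which merges the three left values and the three right values into one target phase pair per target pixel group, can be sketched as follows. Averaging is an assumed choice of the combination operation, and the inputs are illustrative.

```python
def second_size_phase(left_combined, right_combined):
    """left_combined / right_combined: the three L and three R second combined values.
    Returns one (L, R) target phase pair, i.e. a 2x1 phase array entry."""
    return (sum(left_combined) / len(left_combined),
            sum(right_combined) / len(right_combined))

phase_pair = second_size_phase([11.0, 13.0, 15.0], [12.0, 14.0, 16.0])  # -> (13.0, 14.0)
```

The resulting pair corresponds to the 2 × 1 pixel phase array stated for the second size output mode.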
  • when the light intensity of the current shooting scene is greater than the third preset threshold and less than or equal to the second preset threshold, the light is weak, and the panchromatic pixels can receive more light information. Therefore, the color pixel groups together with the panchromatic pixel groups, or the panchromatic pixel groups alone, in the pixel array are used as the target pixel groups, and for each target pixel group the phase information of the sub-pixels is combined twice using the second size output mode, which improves the accuracy of the output phase information and its signal-to-noise ratio. Finally, focusing accuracy can be improved by performing phase focusing based on the phase array of the second size corresponding to the pixel array.
  • a second-size phase array of the pixel array in the first direction is generated, including:
  • determining, according to the light intensity of the current shooting scene, the third phase weight corresponding to the color pixel group and the fourth phase weight corresponding to the panchromatic pixel group; wherein the third phase weights corresponding to the color pixel groups are different under different light intensities, and the fourth phase weights corresponding to the panchromatic pixel groups are different under different light intensities;
  • a second-size phase array of the pixel array in the first direction is generated.
  • if the determined target pixel group includes both a color pixel group and a panchromatic pixel group, the weights between the different pixel groups may be considered.
  • the third phase weight corresponding to the color pixel group and the fourth phase weight corresponding to the panchromatic pixel group can be determined according to the light intensity of the current shooting scene.
  • the third phase weights corresponding to the color pixel groups under different light intensities are different
  • the fourth phase weights corresponding to the panchromatic pixel groups under different light intensities are different.
  • the third phase weight corresponding to the color pixel group is 40%, of which the phase weight of the green pixel group is 20%, the phase weight of the red pixel group is 10%, and the phase weight of the blue pixel group is 10%.
  • the fourth phase weight corresponding to the panchromatic pixel group is 60%, which is not limited in this application.
  • a second-size phase array of the pixel array in the first direction can be generated. For example, for this pixel array, the phase information of the pixel array in the first direction is calculated as a weighted sum of: the target phase information of the first red pixel group with a phase weight of 10%, the target phase information of the second red pixel group with a phase weight of 10%, the target phase information of the first blue pixel group with a phase weight of 10%, the target phase information of the second blue pixel group with a phase weight of 10%, the target phase information of each green pixel group with a phase weight of 20%, and the target phase information of each panchromatic pixel group with a phase weight of 60%. That is, the phase array of the second size is obtained.
  • when performing phase focusing, if it is determined that the target pixel group includes a color pixel group and a panchromatic pixel group, the target phase information of the color pixel group and its third phase weight, together with the target phase information of the panchromatic pixel group and its fourth phase weight, can be used to generate the phase array of the second size of the pixel array.
  • the phase array of the second size of the pixel array is jointly generated, which can improve the comprehensiveness of the phase information.
  • the phase weights of the target phase information of the color pixel group and the panchromatic pixel group are different under different light intensities. In this way, the accuracy of the phase information can be improved by adjusting the weights under different light intensities.
  • at least two photosensitive elements corresponding to each target pixel in the target pixel group are arranged along the first direction; if the light intensity of the current shooting scene is less than or equal to the third preset threshold, outputting the phase array corresponding to the pixel array according to the phase information output mode includes:
  • this embodiment is a specific implementation of outputting the phase array corresponding to the pixel array according to the third size output mode when the light intensity of the current shooting scene is less than or equal to the third preset threshold, that is, at dark night or in an environment where the light intensity is less than or equal to 50 lux. As shown in FIG. 16, firstly, two panchromatic pixel groups adjacent along the second diagonal direction are determined from the pixel array as target pixel groups for calculating phase information.
  • the panchromatic pixels can capture more light information in an extremely dark scene, so at this time the phase information of the panchromatic pixel groups can be used to achieve phase focusing (PDAF). Therefore, when outputting the phase array corresponding to the pixel array according to the third size output mode, the panchromatic pixel groups can be selected as the target pixel groups.
  • the phase information of some panchromatic pixel groups in the pixel array can be used to achieve phase focusing, or only some of the panchromatic pixels within those panchromatic pixel groups can be used to achieve phase focusing, which is not limited in this application.
  • the phase information of only some pixel groups, or only some pixels in some pixel groups, can be used for phase focusing, which reduces the data volume of the output phase information and thereby improves the efficiency of phase focusing.
  • phase information of multiple groups of three adjacent sub-pixels is acquired from the panchromatic pixel groups along a second direction; wherein the second direction and the first direction are perpendicular to each other;
  • each pixel includes at least two sub-pixels arranged in an array, and each sub-pixel corresponds to a photosensitive element.
  • the two sub-pixels may be arranged up and down or left and right, which is not limited in this application.
  • phase information of multiple groups of three adjacent sub-pixels is acquired along the second direction. The phase information of each group of three adjacent sub-pixels is then combined to generate third combined phase information.
  • phase information of multiple groups of three adjacent sub-pixels can be obtained along the second direction; specifically, 12 groups of phase information can be obtained. The phase information of the 12 groups of three adjacent sub-pixels is then combined respectively to generate 12 groups of third combined phase information.
  • a pixel array may include 2 red pixel groups, 4 green pixel groups, 2 blue pixel groups and 8 panchromatic pixel groups. Assuming that all panchromatic pixel groups in the pixel array are used as target pixel groups, the phase information of each of the 8 panchromatic pixel groups included in the pixel array is calculated in turn. The calculation is illustrated below for one panchromatic pixel group.
  • the panchromatic pixel group includes 9 panchromatic pixels arranged in a 3 × 3 array, which are sequentially numbered panchromatic pixel 1, panchromatic pixel 2, panchromatic pixel 3, panchromatic pixel 4, panchromatic pixel 5, panchromatic pixel 6, panchromatic pixel 7, panchromatic pixel 8 and panchromatic pixel 9.
  • each pixel includes two sub-pixels arranged left and right, and each sub-pixel corresponds to a photosensitive element.
  • panchromatic pixel 1 includes two sub-pixels arranged on the left and right, and the phase information of these two sub-pixels is L1 and R1 respectively;
  • likewise, panchromatic pixels 2 through 9 each include two sub-pixels arranged on the left and right, whose phase information is L2 and R2, L3 and R3, L4 and R4, L5 and R5, L6 and R6, L7 and R7, L8 and R8, and L9 and R9 respectively.
  • phase information of multiple groups of three adjacent sub-pixels is acquired along the second direction.
  • the 12 groups of three adjacent sub-pixels are, respectively: L1, L4 and L7; L2, L5 and L8; L3, L6 and L9 in the first panchromatic pixel group; R1, R4 and R7; R2, R5 and R8; R3, R6 and R9 in the first panchromatic pixel group; L1, L4 and L7; L2, L5 and L8; L3, L6 and L9 in the second panchromatic pixel group; and R1, R4 and R7; R2, R5 and R8; R3, R6 and R9 in the second panchromatic pixel group.
  • the 12 groups of phase information of three adjacent sub-pixels are combined respectively to generate 12 groups of third combined phase information.
  • L1, L4 and L7 in the first panchromatic pixel group are combined to generate the third combined phase information L₁;
  • L2, L5 and L8 are combined to generate the third combined phase information L₂;
  • L3, L6 and L9 are combined to generate the third combined phase information L₃;
  • R1, R4 and R7 are combined to generate the third combined phase information R₁; R2, R5 and R8 are combined to generate the third combined phase information R₂; and R3, R6 and R9 are combined to generate the third combined phase information R₃.
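The vertical three-sub-pixel merge described above can be sketched as follows. This is a minimal illustration only: the patent does not fix the exact merge operator, so simple averaging is an assumption, and the function name is hypothetical.

```python
def combine_vertical_triples(group, combine=lambda a, b, c: (a + b + c) / 3):
    """Combine the three vertically adjacent sub-pixel phase values in
    each column of a 3x3 panchromatic pixel group into a single value.

    `group` is a list of three rows, e.g. the left sub-pixel readings
    [[L1, L2, L3], [L4, L5, L6], [L7, L8, L9]].  Returns
    [combine(L1, L4, L7), combine(L2, L5, L8), combine(L3, L6, L9)].
    """
    top, mid, bot = group
    return [combine(a, b, c) for a, b, c in zip(top, mid, bot)]

# Left sub-pixel phase readings of one 3x3 panchromatic pixel group.
left = [[10, 20, 30],
        [12, 22, 32],
        [14, 24, 34]]
merged = combine_vertical_triples(left)  # one value per column
```

Running the same function on the right sub-pixel readings would yield the R₁, R₂, R₃ counterparts.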
  • a phase array of the third size corresponding to the pixel array is generated; the size of the third-size phase array is equivalent to the size of 2 × 1 pixels arranged in the array.
  • a phase array of the third size corresponding to the pixel array may be generated according to the third combined phase information. Specifically, if the target pixel group is two panchromatic pixel groups adjacent along the second diagonal direction in the pixel array, the phase information of the three adjacent sub-pixels is combined to generate 12 groups of third combined phase information. Then, the 12 groups of third combined phase information may be directly output as a phase array of the third size corresponding to the panchromatic pixel array.
  • L₁, R₁, L₂, R₂, L₃ and R₃ corresponding to the first panchromatic pixel group, and L₁, R₁, L₂, R₂, L₃ and R₃ corresponding to the second panchromatic pixel group, are arranged in sequence to generate a third-size phase array.
  • the size of the phase array of the third size is equivalent to the size of 6 ⁇ 2 pixels arranged in the array.
  • the conversion processing may be processing such as performing correction on the 12 sets of third combined phase information, which is not limited in this application.
  • L₁, L₂ and L₃ corresponding to the first panchromatic pixel group and L₁, L₂ and L₃ corresponding to the second panchromatic pixel group are merged to generate the target phase information on the left;
  • R₁, R₂ and R₃ corresponding to the first panchromatic pixel group and R₁, R₂ and R₃ corresponding to the second panchromatic pixel group are merged to generate the target phase information on the right.
  • in this way, the phase array equivalent to 6 × 2 pixels is merged into a phase array with a size of 2 × 1 pixels.
  • the present application does not limit the specific size of the combined phase array.
  • the pixel size refers to the area size of a pixel, and the area size is related to the length and width of the pixel.
  • a pixel is the smallest photosensitive unit on a photosensitive device (CCD or CMOS) of a digital camera.
  • the size of the 2 × 1 pixels arranged in the array is: the length is 2 × 0.0778 mm, and the width is 1 × 0.0778 mm.
  • correspondingly, the size of the third-size phase array is 2 × 0.0778 mm in length and 1 × 0.0778 mm in width.
  • the pixels may not be rectangles with equal length and width (i.e., squares); pixels may also have other irregular structures, which is not limited in this application.
  • for the other target pixel groups, the above method is likewise used to generate their own third-size phase arrays. Based on all the phase arrays of the third size, the phase information of the pixel array is obtained.
  • the phase array can be input into the ISP, and the phase difference of the pixel array can be calculated by the ISP based on the phase array. Then the defocus distance is calculated based on the phase difference, and the DAC code value corresponding to the defocus distance is obtained. Finally, the code value is converted into a driving current by the driver IC of the voice coil motor (VCM), and the motor drives the lens to move to the in-focus position.
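The focusing chain just described (phase array → phase difference → defocus distance → DAC code → VCM drive) can be sketched as below. All numeric coefficients (`slope`, `counts_per_mm`, `code_at_focus`) are placeholder assumptions; in practice they come from per-module calibration, and a real ISP computes the phase difference with a correlation search rather than a plain difference.

```python
def phase_difference(left, right):
    # Mean per-element difference between the left and right phase arrays.
    # (A real ISP uses a calibrated correlation search instead.)
    return sum(l - r for l, r in zip(left, right)) / len(left)

def defocus_from_pd(pd, slope=0.05):
    # Calibrated linear map from phase difference to defocus distance (mm).
    return pd * slope

def dac_code(defocus_mm, counts_per_mm=400, code_at_focus=512):
    # Convert the defocus distance into a VCM DAC code value.
    return int(round(code_at_focus + defocus_mm * counts_per_mm))

pd = phase_difference([1.2, 1.0, 1.1], [1.0, 0.8, 0.9])
code = dac_code(defocus_from_pd(pd))
```

The driver IC then turns `code` into a coil current that moves the lens toward the in-focus position.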
  • the phase information collected by the color pixel groups is not very accurate, and some color pixel groups may not have acquired phase information at all. Therefore, the two panchromatic pixel groups adjacent along the second diagonal direction in the pixel array are used as target pixel groups, and for these two panchromatic pixel groups the phase information of the sub-pixels is merged using the third-size output mode, which improves the accuracy of the output phase information and improves its signal-to-noise ratio. Ultimately, focusing accuracy can be improved by performing phase focusing based on the third-size phase array corresponding to the pixel array.
  • a phase array of a third size corresponding to the pixel array is generated, including:
  • a third-sized phase array of the pixel array in the first direction is generated according to the target phase information.
  • Outputting all the target phase information generates a third-size phase array of the pixel array in the first direction.
  • the phase information collected by the color pixel groups is not very accurate, and some color pixel groups may not have acquired phase information at all. Therefore, the two panchromatic pixel groups adjacent along the second diagonal direction in the pixel array are used as target pixel groups, and for these two panchromatic pixel groups the phase information of the sub-pixels is merged to the greatest extent using the third-size output mode, which improves the accuracy of the output phase information and improves its signal-to-noise ratio. Ultimately, focusing accuracy can be improved by performing phase focusing based on the third-size phase array corresponding to the pixel array.
  • before outputting the phase array corresponding to the pixel array according to the phase information output mode, the method further includes:
  • outputting the phase array corresponding to the pixel array according to the phase information output mode includes:
  • outputting, according to the phase information output mode, a phase array corresponding to the target pixel array.
  • the area of the image sensor is large, and it contains tens of thousands of minimal-unit pixel arrays. If all the phase information were extracted from the image sensor for phase focusing, the amount of phase information data would be too large, resulting in an excessive amount of computation, which wastes system resources and reduces image processing speed.
  • the pixel arrays used for focus control can be extracted in advance from the multiple pixel arrays in the image sensor according to a preset extraction ratio and preset extraction positions. For example, extraction may be performed at a preset extraction ratio of about 3%, that is, one pixel array is extracted from every 32 pixel arrays as a pixel array for focus control. The extracted pixel arrays may be arranged as the vertices of a hexagon, that is, the extracted pixel arrays form a hexagon. In this way, phase information can be obtained uniformly.
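The sparse extraction described above can be sketched as follows. The 1-in-32 ratio matches the example; treating the minimal-unit pixel arrays as a flat index list, and the function name, are assumptions for illustration.

```python
def select_focus_arrays(num_arrays, every=32, offset=0):
    """Pick every `every`-th minimal-unit pixel array for focus control,
    starting at the preset extraction position `offset`.

    Returns the indices of the selected pixel arrays; the remaining
    arrays contribute no phase information, cutting the data volume
    by roughly a factor of `every`.
    """
    return list(range(offset, num_arrays, every))

chosen = select_focus_arrays(128)  # 128 minimal units -> 4 selected
```

In a real sensor the preset positions would be 2-D coordinates (e.g. hexagon vertices) rather than flat indices.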
  • the present application does not limit the preset extraction ratio and preset extraction position.
  • the phase information output mode adapted to the light intensity of the current shooting scene can be determined.
  • for the pixel array used for focus control, the phase array corresponding to the pixel array is output according to the phase information output mode; wherein the phase array includes the phase information corresponding to the target pixels in the pixel array.
  • the phase difference of the pixel array is calculated based on the phase array, and focus control is performed according to the phase difference.
  • the target pixel array is determined from multiple pixel arrays in the image sensor according to the preset extraction ratio and preset extraction position of the pixel array used for focus control. In this way, instead of using all the phase information in the image sensor for focusing, only the phase information corresponding to the target pixel array is used for focusing, which greatly reduces the amount of data and improves the speed of image processing.
  • the target pixel array is determined from the multiple pixel arrays in the image sensor, so that the phase information can be obtained more uniformly, ultimately improving the accuracy of phase focusing.
  • a focus control method further comprising:
  • the first preset threshold, the second preset threshold and the third preset threshold of the light intensity are determined according to the exposure parameter and the size of the pixel.
  • when determining the thresholds of the light intensity, they may be determined according to the exposure parameters and the size of the pixel.
  • the exposure parameters include shutter speed, lens aperture size and sensitivity (ISO).
  • the first preset threshold, the second preset threshold and the third preset threshold of the light intensity are determined according to the exposure parameters and the size of the pixel, and the light intensity range is divided into four ranges; thus, within each light intensity range, the phase information output mode corresponding to that range is adopted, thereby achieving more refined phase information calculation.
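The three thresholds dividing light intensity into four ranges, each mapped to an output mode, can be sketched as a simple selector. The threshold values and the mode labels are illustrative assumptions; only the ordering (first > second > third) is stated by the text.

```python
def select_output_mode(light, t1, t2, t3):
    """Map a light intensity to a phase-information output mode using
    the three preset thresholds, where t1 > t2 > t3."""
    assert t1 > t2 > t3
    if light > t1:
        return "full-size"    # brightest scenes: largest phase array
    if light > t2:
        return "first-size"
    if light > t3:
        return "second-size"
    return "third-size"       # darkest scenes: most aggressive merging

mode = select_output_mode(80, t1=100, t2=60, t3=20)
```

Larger phase arrays keep more resolution in bright light; smaller ones trade resolution for signal-to-noise ratio in dark light.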
  • a focus control device 1700 is provided, which is applied to an image sensor, and the device includes:
  • the phase information output mode determination module 1720 is configured to determine a phase information output mode adapted to the light intensity of the current shooting scene according to the light intensity of the current shooting scene; wherein the sizes of the output phase arrays are different in different phase information output modes;
  • the phase array output module 1740 is configured to output the phase array corresponding to the pixel array according to the phase information output mode; wherein, the phase array includes phase information corresponding to the target pixel in the pixel array;
  • the focus control module 1760 is configured to calculate the phase difference of the pixel array based on the phase array, and perform focus control according to the phase difference.
  • the phase information output mode determination module 1720 is also used to determine the target light intensity range to which the light intensity of the current shooting scene belongs; wherein, different light intensity ranges correspond to different phase information output modes;
  • a phase information output mode suitable for the light intensity of the current shooting scene is determined.
  • the phase information output mode includes a full-size output mode and a first-size output mode, and the size of the phase array in the full-size output mode is larger than the size of the phase array in the first-size output mode;
  • the phase information output mode determination module 1720 is also used to determine that the phase information output mode adapted to the light intensity of the current shooting scene is a full-size output mode if the light intensity of the current shooting scene is greater than the first preset threshold; If the light intensity of the scene is greater than the second preset threshold and less than or equal to the first preset threshold, it is determined that the phase information output mode adapted to the light intensity of the current shooting scene is the first size output mode; the first preset threshold is greater than Second preset threshold.
  • the phase array output module 1740 includes:
  • a target pixel group determining unit 1742 configured to use the color pixel group in the pixel array as a target pixel group; the target pixel group includes target pixels;
  • phase information acquiring unit 1744 configured to acquire phase information of sub-pixels of the target pixel for each target pixel group
  • the full-size phase array generation unit 1746 is configured to generate a full-size phase array corresponding to the pixel array according to the phase information of the sub-pixels of the target pixel; the size of the full-size phase array is 6 ⁇ 3 pixels arranged in the array.
  • phase array output module 1740 includes:
  • the target pixel group determining unit is further configured to use at least one of the color pixel group and the panchromatic pixel group in the pixel array as the target pixel group;
  • the phase information acquiring unit is also used to acquire, for each target pixel group, the phase information of multiple groups of two adjacent sub-pixels along the second direction with one sub-pixel as the sliding-window step size; wherein the second direction and the first direction are perpendicular to each other;
  • the first-size phase array generation unit is used to combine the multiple groups of phase information of two adjacent sub-pixels to generate multiple groups of first combined phase information, and to generate, according to the multiple groups of first combined phase information, a first-size phase array corresponding to the pixel array; the size of the first-size phase array is the size of 4 × 2 pixels arranged in the array.
  • the first-size phase array generating unit is further used to combine two adjacent pieces of first combined phase information to generate target phase information, wherein the sub-pixels used to generate the two adjacent pieces of first combined phase information are at the same position in the target pixel, and to generate, according to the target phase information, a first-size phase array of the pixel array in the first direction.
  • the first-size phase array generation unit is further configured to determine the first phase weight corresponding to the color pixel group and the second phase weight corresponding to the panchromatic pixel group; wherein the first phase weights corresponding to the color pixel group under different light intensities are different, and the second phase weights corresponding to the panchromatic pixel group under different light intensities are different; and to generate, based on the target phase information and the first phase weight of the color pixel group, and the target phase information and the second phase weight of the panchromatic pixel group, a first-size phase array of the pixel array in the first direction.
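The light-dependent weighting of color versus panchromatic phase information can be sketched as below. The patent only states that the weights vary with light intensity; the linear ramp, its breakpoints `low`/`high`, and the function name are assumptions for illustration.

```python
def fuse_phase(color_phase, pan_phase, light, low=20.0, high=100.0):
    """Blend the color-group and panchromatic-group target phase
    information.  Brighter scenes weight the color groups more, while
    darker scenes weight the more light-sensitive panchromatic groups.
    """
    # Color-group weight ramps linearly from 0 (at `low`) to 1 (at `high`).
    w_color = min(max((light - low) / (high - low), 0.0), 1.0)
    w_pan = 1.0 - w_color
    return w_color * color_phase + w_pan * pan_phase

fused = fuse_phase(color_phase=2.0, pan_phase=1.0, light=60.0)
```

The same weighted blend applies per element of the target phase arrays when building the first-size (or second-size) phase array.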
  • the phase information output mode further includes a second-size output mode and a third-size output mode; wherein the size of the phase array in the first-size output mode is larger than the size of the phase array in the second-size output mode, and the size of the phase array in the second-size output mode is larger than the size of the phase array in the third-size output mode;
  • the phase information output mode determination module 1720 is further configured to: if the light intensity of the current shooting scene is greater than the third preset threshold and less than or equal to the second preset threshold, determine that the phase information output mode adapted to the light intensity of the current shooting scene is the second-size output mode; and if the light intensity of the current shooting scene is less than or equal to the third preset threshold, determine that the phase information output mode adapted to the light intensity of the current shooting scene is the third-size output mode; the second preset threshold is greater than the third preset threshold.
  • phase array output module 1740 includes:
  • the target pixel group determining unit is further configured to use the color pixel group and the panchromatic pixel group in the pixel array as the target pixel group, or use the panchromatic pixel group as the target pixel group;
  • the phase information acquiring unit is further configured to acquire phase information of multiple sets of adjacent three sub-pixels along the second direction for each target pixel group;
  • the second-size phase array generation unit is used to combine the multiple groups of phase information of three adjacent sub-pixels to generate multiple groups of second combined phase information, and to generate, according to the multiple groups of second combined phase information, a second-size phase array corresponding to the pixel array.
  • the second-size phase array generating unit is further used to combine three adjacent pieces of second combined phase information to generate target phase information, wherein the sub-pixels used to generate the three adjacent pieces of second combined phase information are at the same position in the target pixel, and to generate, according to the target phase information, a second-size phase array of the pixel array in the first direction.
  • the second-size phase array generation unit is further configured to determine the third phase weight corresponding to the color pixel group and the fourth phase weight corresponding to the panchromatic pixel group; wherein the third phase weights corresponding to the color pixel group under different light intensities are different, and the fourth phase weights corresponding to the panchromatic pixel group under different light intensities are different; and to generate, based on the target phase information and the third phase weight of the color pixel group, and the target phase information and the fourth phase weight of the panchromatic pixel group, a second-size phase array of the pixel array in the second direction.
  • the phase array output module 1740 includes:
  • the target pixel group determining unit is further configured to use two panchromatic pixel groups adjacent along the second diagonal direction in the pixel array as target pixel groups;
  • the phase information acquiring unit is further configured to, for each panchromatic pixel group in the target pixel group, acquire phase information of multiple groups of three adjacent sub-pixels from the panchromatic pixel group along a second direction; wherein the second direction and the first direction are perpendicular to each other;
  • the third-size phase array generation unit is used to combine the multiple groups of phase information of three adjacent sub-pixels to generate multiple groups of third combined phase information, and to generate, according to the multiple groups of third combined phase information, a third-size phase array corresponding to the pixel array; the size of the third-size phase array is the size of 2 × 1 pixels arranged in the array.
  • the third-size phase array generation unit is further configured to combine the six pieces of third combined phase information again to generate target phase information; wherein the sub-pixels used to generate the six pieces of third combined phase information are at the same position in the target pixel; and to generate, according to the target phase information, a third-size phase array of the pixel array in the first direction.
  • a focus control device further comprising:
  • a target pixel array determining module configured to determine the target pixel array from multiple pixel arrays in the image sensor according to the preset extraction ratio and preset extraction position of the pixel array used for focus control;
  • the phase array output module 1740 is further configured to output the phase array corresponding to the target pixel array according to the phase information output mode.
  • a focus control device further comprising:
  • the threshold determination module is used to determine the first preset threshold, the second preset threshold and the third preset threshold of the light intensity according to the exposure parameter and the size of the pixel.
  • each module in the above-mentioned focus control device is only for illustration. In other embodiments, the focus control device can be divided into different modules according to needs, so as to complete all or part of the functions of the above-mentioned focus control device.
  • Each module in the above-mentioned focusing control device can be fully or partially realized by software, hardware and a combination thereof.
  • the above-mentioned modules can be embedded in or independent of the processor in the computer device in the form of hardware, and can also be stored in the memory of the computer device in the form of software, so that the processor can invoke and execute the corresponding operations of the above-mentioned modules.
  • Fig. 19 is a schematic diagram of the internal structure of an electronic device in one embodiment.
  • the electronic device can be any terminal device such as mobile phone, tablet computer, notebook computer, desktop computer, PDA (Personal Digital Assistant, personal digital assistant), POS (Point of Sales, sales terminal), vehicle-mounted computer, wearable device, etc.
  • the electronic device includes a processor and memory connected by a system bus.
  • the processor may include one or more processing units.
  • the processor can be a CPU (Central Processing Unit, central processing unit) or a DSP (Digital Signal Processing, digital signal processor), etc.
  • the memory may include non-volatile storage media and internal memory. Nonvolatile storage media store operating systems and computer programs.
  • the computer program can be executed by a processor, so as to implement a focus control method provided in the following embodiments.
  • the internal memory provides a high-speed running environment for the operating system and computer program in the non-volatile storage medium.
  • each module in the focus control device provided in the embodiment of the present application may be in the form of a computer program.
  • the computer program can run on the electronic device.
  • the program modules constituted by the computer program can be stored in the memory of the electronic device.
  • the embodiment of the present application also provides a computer-readable storage medium.
  • One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform operations of the focus control method.
  • the embodiment of the present application also provides a computer program product including instructions, which, when running on a computer, causes the computer to execute the focusing control method.
  • Non-volatile memory can include ROM (Read-Only Memory, read-only memory), PROM (Programmable Read-only Memory, programmable read-only memory), EPROM (Erasable Programmable Read-Only Memory, erasable programmable read-only memory) Memory), EEPROM (Electrically Erasable Programmable Read-only Memory, Electrically Erasable Programmable Read-only Memory) or flash memory.
  • Volatile memory can include RAM (Random Access Memory, Random Access Memory), which is used as external cache memory.
  • RAM is available in various forms, such as SRAM (Static Random Access Memory, static random access memory), DRAM (Dynamic Random Access Memory, dynamic random access memory), SDRAM (Synchronous Dynamic Random Access Memory , synchronous dynamic random access memory), double data rate DDR SDRAM (Double Data Rate Synchronous Dynamic Random Access memory, double data rate synchronous dynamic random access memory), ESDRAM (Enhanced Synchronous Dynamic Random Access memory, enhanced synchronous dynamic random access memory access memory), SLDRAM (Sync Link Dynamic Random Access Memory, synchronous link dynamic random access memory), RDRAM (Rambus Dynamic Random Access Memory, bus dynamic random access memory), DRDRAM (Direct Rambus Dynamic Random Access Memory, interface dynamic random access memory) memory).

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Color Television Image Signal Generators (AREA)
  • Studio Devices (AREA)
  • Focusing (AREA)
  • Automatic Focus Adjustment (AREA)

Abstract

The present application relates to a focus control method and device, an image sensor, an electronic device, and a computer-readable storage medium, applied to an image sensor. The method includes: determining, according to the light intensity of the current shooting scene, a phase information output mode adapted to the light intensity of the current shooting scene (820), wherein the sizes of the output phase arrays differ between different phase information output modes; outputting, according to the phase information output mode, a phase array corresponding to the pixel array, wherein the phase array includes the phase information corresponding to target pixels in the pixel array (840); and calculating the phase difference of the pixel array based on the phase array, and performing focus control according to the phase difference (860).

Description

Focus control method and device, image sensor, electronic device, and computer-readable storage medium

This application claims priority to Chinese Patent Application No. 202111383861.9, filed with the Chinese Patent Office on November 22, 2021 and entitled "Focus control method and device, image sensor, electronic device and computer-readable storage medium", the entire contents of which are incorporated herein by reference.

Technical Field

The present application relates to the field of image processing technology, and in particular to a focus control method and device, an image sensor, an electronic device, and a computer-readable storage medium.

Background Art

With the development of electronic devices, more and more users shoot images with electronic devices. To ensure that the captured image is sharp, the camera module of the electronic device usually needs to be focused, that is, the distance between the lens and the image sensor is adjusted so that the subject is imaged on the focal plane. Traditional focusing methods include phase detection auto focus (PDAF).

Traditional phase detection auto focus mainly calculates a phase difference based on an RGB pixel array, then controls a motor based on the phase difference, and the motor drives the lens to move to a suitable position for focusing, so that the subject is imaged on the focal plane.

However, since the sensitivity of the RGB pixel array differs under different light intensities, the accuracy of the phase difference calculated through the RGB pixel array is low under some light intensities, which in turn greatly reduces the focusing accuracy.
Summary

Embodiments of the present application provide a focus control method and device, an electronic device, an image sensor, and a computer-readable storage medium, which can improve focusing accuracy.

In one aspect, an image sensor is provided. The image sensor includes a pixel array and a filter array. The filter array includes a minimal repeating unit, the minimal repeating unit includes a plurality of filter groups, and each filter group includes color filters and panchromatic filters; the color filter has a narrower spectral response than the panchromatic filter, and both the color filter and the panchromatic filter include 9 sub-filters arranged in an array;

wherein the pixel array includes a plurality of panchromatic pixel groups and a plurality of color pixel groups, each panchromatic pixel group corresponds to a panchromatic filter, and each color pixel group corresponds to a color filter; both the panchromatic pixel group and the color pixel group include 9 pixels, the pixels of the pixel array are arranged in correspondence with the sub-filters of the filter array, each pixel includes at least two sub-pixels arranged in an array, and each sub-pixel corresponds to a photosensitive element.

In another aspect, a focus control method is provided, applied to the image sensor described above. The method includes:

determining, according to the light intensity of the current shooting scene, a phase information output mode adapted to the light intensity of the current shooting scene, wherein the sizes of the output phase arrays differ between different phase information output modes;

outputting, according to the phase information output mode, a phase array corresponding to the pixel array, wherein the phase array includes phase information corresponding to target pixels in the pixel array;

calculating the phase difference of the pixel array based on the phase array, and performing focus control according to the phase difference.
In another aspect, a focus control device is provided, applied to the image sensor described above. The device includes:

a phase information output mode determination module, configured to determine, according to the light intensity of the current shooting scene, a phase information output mode adapted to the light intensity of the current shooting scene, wherein the sizes of the output phase arrays differ between different phase information output modes;

a phase array output module, configured to output, according to the phase information output mode, a phase array corresponding to the pixel array, wherein the phase array includes phase information corresponding to target pixels in the pixel array;

a focus control module, configured to calculate the phase difference of the pixel array based on the phase array, and perform focus control according to the phase difference.

In another aspect, an electronic device is provided, including a memory and a processor. A computer program is stored in the memory, and when executed by the processor, the computer program causes the processor to perform the operations of the focus control method described above.

In another aspect, a computer-readable storage medium is provided, on which a computer program is stored. When executed by a processor, the computer program implements the operations of the method described above.

In another aspect, a computer program product is provided, including a computer program/instructions. When executed by a processor, the computer program/instructions implement the operations of the focus control method described above.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of the present application or the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.

Fig. 1 is a schematic structural diagram of an electronic device in one embodiment;

Fig. 2 is a schematic diagram of the principle of phase detection auto focus;

Fig. 3 is a schematic diagram of phase detection pixel points arranged in pairs among the pixel points included in an image sensor;

Fig. 4 is an exploded schematic view of an image sensor in one embodiment;

Fig. 5 is a schematic diagram of the connection between a pixel array and a readout circuit in one embodiment;

Fig. 6 is a schematic diagram of the arrangement of a minimal repeating unit in one embodiment;

Fig. 7 is a schematic diagram of the arrangement of a minimal repeating unit in another embodiment;

Fig. 8 is a flowchart of a focus control method in one embodiment;

Fig. 9 is a flowchart of a method for determining, according to the target light intensity range, a phase information output mode adapted to the light intensity of the current shooting scene in one embodiment;

Fig. 10 is a flowchart of a method for generating a full-size phase array in one embodiment;

Fig. 11 is a schematic diagram of generating a full-size phase array in one embodiment;

Fig. 12 is a flowchart of a method for generating a first-size phase array in one embodiment;

Fig. 13 is a schematic diagram of generating a first-size phase array in another embodiment;

Fig. 14 is a flowchart of a method for generating a second-size phase array in one embodiment;

Fig. 15 is a schematic diagram of generating a second-size phase array in one embodiment;

Fig. 16 is a schematic diagram of generating a third-size phase array in one embodiment;

Fig. 17 is a structural block diagram of a focus control device in one embodiment;

Fig. 18 is a structural block diagram of the phase array output module in Fig. 17;

Fig. 19 is a schematic diagram of the internal structure of an electronic device in one embodiment.
Detailed Description

To make the objectives, technical solutions and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application, and are not intended to limit the present application.

It can be understood that the terms "first", "second", "third", etc. used in the present application may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, without departing from the scope of the present application, the first size may be called the second size, and similarly, the second size may be called the first size. Both the first size and the second size are sizes, but they are not the same size. The first preset threshold may be called the second preset threshold, and similarly, the second preset threshold may be called the first preset threshold. Both the first preset threshold and the second preset threshold are preset thresholds, but they are not the same preset threshold.

Fig. 1 is a schematic diagram of the application environment of a focus control method in one embodiment. As shown in Fig. 1, the application environment includes an electronic device 100. The electronic device 100 includes an image sensor, and the image sensor includes a pixel array. The electronic device determines, according to the light intensity of the current shooting scene, a phase information output mode adapted to the light intensity of the current shooting scene, wherein the sizes of the output phase arrays differ between different phase information output modes; outputs, according to the phase information output mode, a phase array corresponding to the pixel array, wherein the phase array includes the phase information corresponding to the pixel array; calculates the phase difference of the pixel array based on the phase array, and performs focus control according to the phase difference. The electronic device may be any terminal device with an image processing function, such as a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a wearable device (smart wristband, smart watch, smart glasses, smart gloves, smart socks, smart belt, etc.), a VR (virtual reality) device, a smart home device, or a self-driving car.

The electronic device 100 includes a camera 20, a processor 30 and a housing 40. The camera 20 and the processor 30 are both arranged in the housing 40. The housing 40 can also be used to mount functional modules of the terminal 100 such as a power supply device and a communication device, so that the housing 40 provides the functional modules with protection against dust, falls, water, etc. The camera 20 may be a front camera, a rear camera, a side camera, an under-screen camera, etc., which is not limited here. The camera 20 includes a lens and an image sensor 21. When the camera 20 captures an image, light passes through the lens and reaches the image sensor 21, and the image sensor 21 is used to convert the light signal irradiated onto it into an electrical signal.

Fig. 2 is a schematic diagram of the principle of phase detection auto focus (PDAF). As shown in Fig. 2, M1 is the position of the image sensor when the imaging device is in the in-focus state, where the in-focus state refers to the state of successful focusing. When the image sensor is at position M1, the imaging light rays g reflected by the object W toward the lens in different directions converge on the image sensor; that is, the imaging light rays g reflected by the object W toward the lens in different directions are imaged at the same position on the image sensor, and the image sensor produces a sharp image.

M2 and M3 are possible positions of the image sensor when the imaging device is not in the in-focus state. As shown in Fig. 2, when the image sensor is at position M2 or M3, the imaging light rays g reflected by the object W toward the lens in different directions are imaged at different positions. Referring to Fig. 2, when the image sensor is at position M2, the imaging light rays g reflected by the object W toward the lens in different directions are imaged at position A and position B respectively; when the image sensor is at position M3, they are imaged at position C and position D respectively. At this time, the image sensor does not produce a sharp image.

In PDAF technology, the difference in position of the images formed on the image sensor by imaging light rays entering the lens from different directions can be acquired; for example, as shown in Fig. 2, the difference between position A and position B, or the difference between position C and position D, can be acquired. After acquiring this difference, the defocus distance can be obtained according to the difference and the geometric relationship between the lens and the image sensor in the camera. The defocus distance refers to the distance between the current position of the image sensor and the position where the image sensor should be in the in-focus state. The imaging device can focus according to the obtained defocus distance.

It can thus be seen that the calculated PD value is 0 when in focus; conversely, the larger the calculated value, the farther from the in-focus position, and the smaller the value, the closer to the in-focus position. When focusing with PDAF, the PD value is calculated, the defocus distance is obtained from the calibrated correspondence between PD value and defocus distance, and then the lens is controlled to move to the in-focus position according to the defocus distance, so as to achieve focusing.

In the related art, some phase detection pixel points may be arranged in pairs among the pixel points included in the image sensor. As shown in Fig. 3, the image sensor may be provided with phase detection pixel point pairs (hereinafter referred to as pixel point pairs) A, B and C. In each pixel point pair, one phase detection pixel point is shielded on the left (Left Shield) and the other phase detection pixel point is shielded on the right (Right Shield).

For a phase detection pixel point with left shielding, only the right-hand portion of the imaging light beam directed at it can be imaged on its photosensitive part (i.e., the unshielded part); for a phase detection pixel point with right shielding, only the left-hand portion of the imaging light beam directed at it can be imaged on its photosensitive part (i.e., the unshielded part). In this way, the imaging light beam can be divided into left and right parts, and the phase difference can be obtained by comparing the images formed by the left and right parts of the imaging light beam.
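Comparing the images formed by the left and right halves of the beam amounts to finding the displacement between two 1-D signals. The sketch below does this with a sum-of-absolute-differences search over candidate shifts; this is an assumed toy method, not the actual ISP correlation algorithm used in any product.

```python
def circular_shift(seq, s):
    # Shift a list right by s positions (negative s shifts left).
    s %= len(seq)
    return seq[-s:] + seq[:-s]

def phase_shift(left_img, right_img, max_shift=4):
    """Estimate the phase difference between the images formed by the
    left-shielded and right-shielded pixels: find the shift of the
    right image that best matches the left image (minimum sum of
    absolute differences)."""
    best_s, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        shifted = circular_shift(right_img, s)
        cost = sum(abs(l - r) for l, r in zip(left_img, shifted))
        if cost < best_cost:
            best_s, best_cost = s, cost
    return best_s

left = [0, 1, 4, 1, 0, 0, 0, 0]
right = circular_shift(left, -2)  # right image displaced by 2 pixels
pd = phase_shift(left, right)
```

A larger recovered shift corresponds to a larger defocus, consistent with the PD-value behaviour described above.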
In one embodiment, it is further described that the image sensor includes a pixel array and a filter array. The filter array includes a minimal repeating unit, the minimal repeating unit includes a plurality of filter groups, and each filter group includes color filters and panchromatic filters. In the filter group, the color filters are arranged in a first diagonal direction and the panchromatic filters are arranged in a second diagonal direction, and the first diagonal direction is different from the second diagonal direction. The color filter has a narrower spectral response than the panchromatic filter, and both the color filter and the panchromatic filter include 9 sub-filters arranged in an array;

wherein the pixel array includes a plurality of panchromatic pixel groups and a plurality of color pixel groups, each panchromatic pixel group corresponds to a panchromatic filter, and each color pixel group corresponds to a color filter; both the panchromatic pixel group and the color pixel group include 9 pixels, the pixels of the pixel array are arranged in correspondence with the sub-filters of the filter array, each pixel includes at least two sub-pixels arranged in an array, and each sub-pixel corresponds to a photosensitive element.
As shown in Fig. 4, the image sensor 21 includes a microlens array 22, a filter array 23 and a pixel array 24.

The microlens array 22 includes a plurality of microlenses 221. The microlenses 221, the sub-filters in the filter array 23 and the pixels in the pixel array 24 are arranged in one-to-one correspondence. The microlens 221 is used to gather incident light; the gathered light passes through the corresponding sub-filter, is then projected onto the pixel and received by the corresponding pixel, and the pixel converts the received light into an electrical signal.

The filter array 23 includes a plurality of minimal repeating units 231. The minimal repeating unit 231 may include a plurality of filter groups 232. Each filter group 232 includes panchromatic filters 233 and color filters 234, and the color filter 234 has a narrower spectral response than the panchromatic filter 233. Each panchromatic filter 233 includes 9 sub-filters 2331, and each color filter 234 includes 9 sub-filters 2341. Different filter groups also include different color filters 234.

The colors corresponding to the wavelength bands of light transmitted by the color filters 234 of the filter groups 232 in the minimal repeating unit 231 include color a, color b and/or color c; that is, they include color a, color b and color c; or color a, color b or color c; or color a and color b; or color b and color c; or color a and color c. Here, color a is red, color b is green and color c is blue; or, for example, color a is magenta, color b is cyan and color c is yellow, which is not limited here.

In one embodiment, the width of the wavelength band of light transmitted by the color filter 234 is smaller than the width of the wavelength band transmitted by the panchromatic filter 233. For example, the wavelength band transmitted by the color filter 234 may correspond to the band of red, green or blue light, while the wavelength band transmitted by the panchromatic filter 233 covers all visible light; that is, the color filter 234 only allows light of a specific color to pass, while the panchromatic filter 233 can pass light of all colors. Of course, the wavelength band transmitted by the color filter 234 may also correspond to other colored light, such as magenta, violet, cyan or yellow light, which is not limited here.

In one embodiment, the ratio of the number of color filters 234 to the number of panchromatic filters 233 in the filter group 232 may be 1:3, 1:1 or 3:1. For example, if the ratio is 1:3, there is 1 color filter 234 and 3 panchromatic filters 233; with more panchromatic filters 233, more phase information can be acquired through them in dark light than in the traditional case of only color filters, so the focusing quality is better. If the ratio is 1:1, there are 2 color filters 234 and 2 panchromatic filters 233; in this case, good color performance can be obtained while more phase information can still be acquired through the panchromatic filters 233 in dark light, so the focusing quality is also good. If the ratio is 3:1, there are 3 color filters 234 and 1 panchromatic filter 233; in this case, better color performance can be obtained, and the focusing quality in dark light can likewise be improved.

The pixel array 24 includes a plurality of pixels, and the pixels of the pixel array 24 are arranged in correspondence with the sub-filters of the filter array 23. The pixel array 24 is configured to receive light passing through the filter array 23 to generate electrical signals.

Here, that the pixel array 24 is configured to receive light passing through the filter array 23 to generate electrical signals means that the pixel array 24 is used to photoelectrically convert the light of the scene of a given set of subjects passing through the filter array 23, so as to generate electrical signals. The light of the scene of the given set of subjects is used to generate image data. For example, if the subject is a building, the scene of the given set of subjects refers to the scene in which the building is located, and the scene may also contain other objects.
在一个实施例中,像素阵列24可以是RGBW像素阵列,包括多个最小重复单元241,最小重复单元241包括多个像素组242,多个像素组242包括全色像素组243和彩色像素组244。每个全色像素组243中包括9个全色像素2431,每个彩色像素组244中包括9个彩色像素2441。每个全色像素2431对应全色滤光片233中的一个子滤光片2331,全色像素2431接收穿过对应的子滤光片2331的光线以生成电信号。每个彩色像素2441对应彩色滤光片234的一个子滤光片2341,彩色像素2441接收穿过对应的子滤光片2341的光线以生成电信号。每个像素包括阵列排布的至少两个子像素,每个子像素对应一个感光元件。即每个全色像素2431包括阵列排布的至少两个子像素2431a及2431b,每个子像素对应一个感光元件;每个彩色像素2441包括阵列排布的至少两个子像素2441a及2441b,每个子像素对应一个感光元件。这里,每个全色像素2431包括阵列排布的至少两个子像素,具体可以是包括阵列排布的两个子像素,或包括阵列排布的四个子像素,本申请对此不做限定。
本实施中的图像传感器21包括滤光片阵列23和像素阵列24，滤光片阵列23包括最小重复单元231，最小重复单元231包括多个滤光片组232，滤光片组包括全色滤光片233和彩色滤光片234，彩色滤光片234具有比全色滤光片233更窄的光谱响应，全色滤光片233在拍摄时可获取到更多的光量，从而无需调节拍摄参数，在不影响拍摄稳定性的情况下提高暗光下的对焦质量，使暗光下对焦的稳定性和质量均较高。并且，每个全色滤光片233中包括9个子滤光片2331，每个彩色滤光片234中包括9个子滤光片2341，像素阵列24包括多个全色像素2431和多个彩色像素2441，每个全色像素2431对应全色滤光片233的一个子滤光片2331，每个彩色像素2441对应彩色滤光片234的一个子滤光片2341，全色像素2431和彩色像素2441用于接收穿过对应的子滤光片的光线以生成电信号，在暗光下对焦时可将9个子滤光片对应的像素的相位信息合并输出，得到信噪比较高的相位信息，而在光线较为充足的场景下，可将每个子滤光片对应的像素的相位信息单独进行输出，从而得到分辨率和信噪比均较高的相位信息，进而能够适配不同的应用场景，并提高在各场景下的对焦质量。
在一个实施例中,如图4所示,滤光片阵列23中的最小重复单元231包括4个滤光片组232,并且4个滤光片组232呈矩阵排列。每个滤光片组232包括全色滤光片233和彩色滤光片234,每个全色滤光片233和每个彩色滤光片234均有9个子滤光片,则该滤光片组232共包括36个子滤光片。
同样的,像素阵列24包括多个最小重复单元241,与多个最小重复单元231对应。每个最小重复单元241包括4个像素组242,并且4个像素组242呈矩阵排列。每个像素组242对应一个滤光片组232。
如图5所示,读出电路25与像素阵列24电连接,用于控制像素阵列24的曝光以及像素点的像素值的读取和输出。读出电路25包括垂直驱动单元251、控制单元252、列处理单元253和水平驱动单元254。垂直驱动单元251包括移位寄存器和地址译码器。垂直驱动单元251包括读出扫描和复位扫描功能。控制单元252根据操作模式配置时序信号,利用多种时序信号来控制垂直驱动单元251、列处理单元253和水平驱动单元254协同工作。列处理单元253可以具有用于将模拟像素信号转换为数字格式的模数(A/D)转换功能。水平驱动单元254包括移位寄存器和地址译码器。水平驱动单元254顺序逐列扫描像素阵列24。
在一个实施例中，如图6所示，每个滤光片组232均包括彩色滤光片234和全色滤光片233，滤光片组232中的各个全色滤光片233设置在第一对角线方向D1，滤光片组232中的各个彩色滤光片234设置在第二对角线方向D2。第一对角线D1方向和第二对角线D2方向不同，能够兼顾色彩表现和暗光对焦质量。
第一对角线D1方向与第二对角线D2方向不同,具体可以是第一对角线D1方向与第二对角线D2方向不平行,或者,第一对角线D1方向与第二对角线D2方向垂直等。
在其他实施方式中,一个彩色滤光片234和一个全色滤光片233可位于第一对角线D1,另一个彩色滤光片234和另一个全色滤光片233可位于第二对角线D2。
在一个实施例中，每个像素包括阵列排布的至少两个子像素，每个子像素对应一个感光元件。其中，感光元件是一种能够将光信号转化为电信号的元件。例如，感光元件可为光电二极管。如图6所示，每个全色像素2431包括阵列排布的2个子像素d(即2个光电二极管PD(Left PhotoDiode、Right PhotoDiode))，每个彩色像素2441包括阵列排布的2个子像素d(即2个光电二极管PD(Left PhotoDiode、Right PhotoDiode))。
当然,每个全色像素2431也可以包括阵列排布的4个子像素d(即4个光电二极管PD(Up-Left PhotoDiode、Up-Right PhotoDiode、Down-Left PhotoDiode及Down-Right PhotoDiode)),每个彩色像素2441包括阵列排布的4个子像素d(即4个光电二极管PD(Up-Left PhotoDiode、Up-Right PhotoDiode、Down-Left PhotoDiode及Down-Right PhotoDiode))。本申请对此不做限定。
本申请实施例中,由于每个像素包括阵列排布的至少两个子像素,每个子像素对应一个感光元件。因此,就可以基于该至少两个子像素的相位信息,计算像素阵列的相位差。
在一个实施例中,如图6所示,滤光片阵列23中的最小重复单元231包括4个滤光片组232,并且4个滤光片组232呈矩阵排列。每个滤光片组232中包含2个全色滤光片233和2个彩色滤光片234。全色滤光片233中包括9个子滤光片2331,彩色滤光片234中包括9个子滤光片2341,则最小重复单元231为12行12列144个子滤光片,排布方式为:
（此处为12行12列子滤光片排布矩阵的图示，原文为图像：Figure PCTCN2022120545-appb-000001、Figure PCTCN2022120545-appb-000002）
其中，w表示全色子滤光片2331，a、b和c均表示彩色子滤光片2341。全色子滤光片2331指的是可滤除可见光波段之外的所有光线的子滤光片，彩色子滤光片2341包括红色子滤光片、绿色子滤光片、蓝色子滤光片、品红色子滤光片、青色子滤光片和黄色子滤光片。红色子滤光片为滤除红光之外的所有光线的子滤光片，绿色子滤光片为滤除绿光之外的所有光线的子滤光片，蓝色子滤光片为滤除蓝光之外的所有光线的子滤光片，品红色子滤光片为滤除品红色光之外的所有光线的子滤光片，青色子滤光片为滤除青光之外的所有光线的子滤光片，黄色子滤光片为滤除黄光之外的所有光线的子滤光片。
a可以是红色子滤光片、绿色子滤光片、蓝色子滤光片、品红色子滤光片、青色子滤光片或黄色子滤光片，b和c同理，也可以是上述任意一种彩色子滤光片。例如，b为红色子滤光片、a为绿色子滤光片、c为蓝色子滤光片；或者，c为红色子滤光片、a为绿色子滤光片、b为蓝色子滤光片；或者，a为红色子滤光片、b为蓝色子滤光片、c为绿色子滤光片等，在此不作限制；再例如，b为品红色子滤光片、a为青色子滤光片、c为黄色子滤光片等。在其他实施方式中，彩色滤光片还可包括其他颜色的子滤光片，如橙色子滤光片、紫色子滤光片等，在此不作限制。
在一个实施例中,如图7所示,滤光片阵列23中的最小重复单元231包括4个滤光片组232,并且4个滤光片组232呈矩阵排列。每个滤光片组232均包括彩色滤光片234和全色滤光片233,滤光片组232中的各个彩色滤光片234设置在第一对角线D1方向,滤光片组232中的各个全色滤光片233设置在第二对角线D2方向。且像素阵列(图7未示,可参考图6)的像素与所述滤光片阵列的子滤光片对应设置,且每个像素包括阵列排布的至少两个子像素,每个子像素对应一个感光元件。
在一个实施例中，每个滤光片组232中包含2个全色滤光片233和2个彩色滤光片234，全色滤光片233中包括9个子滤光片2331，彩色滤光片234中包括9个子滤光片2341，则最小重复单元231为12行12列144个子滤光片，如图7所示，排布方式为：
（此处为12行12列子滤光片排布矩阵的图示，原文为图像：Figure PCTCN2022120545-appb-000003）
其中,w表示全色子滤光片,a、b和c均表示彩色子滤光片。
12行12列144个子滤光片结合了quad和RGBW的双重优势。quad的好处是可以在局部将同色像素进行2×2合并、3×3合并（binning），得到不同分辨率的图像，具有高信噪比；quad全尺寸输出则具有高像素，可得到全尺寸全分辨率的图像，清晰度更高。RGBW的好处是，利用W像素提高图像整体的进光量，进而提升画质信噪比。
在一个实施例中,如图8所示,提供了一种对焦控制方法,应用于如上述实施例中的图像传感器,图像传感器包括像素阵列及滤光片阵列,该方法包括:
操作820,根据当前拍摄场景的光线强度,确定与当前拍摄场景的光线强度适配的相位信息输出模式;其中,在不同的相位信息输出模式下,所输出的相位阵列的大小不同。
在不同拍摄场景或不同时刻,当前拍摄场景的光线强度均不尽相同,而由于RGB像素阵列在不同的光线强度下的感光度不同,因此,在部分光线强度下,通过RGB像素阵列所计算出的相位差的准确性较低,进而导致对焦的准确性也大幅降低。其中,光线强度又称之为光照强度,光照强度是一种物理术语,指单位面积上所接受可见光的光通量,简称照度,单位勒克斯(Lux或lx)。光照强度用于指示光照的强弱和物体表面积被照明程度的量。下表为不同天气及位置下的光照强度值:
表1-1
天气及位置 光照强度值
晴天阳光直射地面 100000lx
晴天室内中央 200lx
阴天室外 50-500lx
阴天室内 5-50lx
月光(满月) 2500lx
晴朗月夜 0.2lx
黑夜 0.0011lx
从上述表1-1中可知，在不同拍摄场景或不同时刻，当前拍摄场景的光线强度相差较大。
为了解决在部分光线强度下,通过RGB像素阵列所计算出的相位差的准确性较低,进而导致对焦的准确性也大幅降低这个问题,在当前拍摄场景不同的光线强度下,确定与当前拍摄场景的光线强度适配的相位信息输出模式,再分别采用不同的相位信息输出模式来输出像素阵列的相位信息。相位信息输出模式指的是基于像素阵列的原始相位信息,对原始相位信息进行处理以生成最终所输出的该像素阵列的相位信息的模式。
其中,在不同的相位信息输出模式下,所输出的相位阵列的大小不同。即在当前拍摄场景不同的光线强度下,同一像素阵列所输出的相位阵列的大小不同。换言之,在当前拍摄场景不同的光线强度下,将同一像素阵列对应的相位信息直接输出作为该像素阵列对应的相位阵列或进行一定程度地合并,生成该像素阵列对应的相位阵列。例如,若当前拍摄场景的光线强度较大,则可以将同一像素阵列对应的相位信息直接输出作为该像素阵列对应的相位阵列。此时所输出的相位阵列的大小等于该像素阵列的尺寸。若当前拍摄场景的光线强度较小,则可以将同一像素阵列对应的相位信息进行一定程度地合并,生成该像素阵列对应的相位阵列。此时所输出的相位阵列的大小小于该像素阵列的尺寸。
由于不同尺寸的相位阵列的信噪比是不同的,因此,可以提高在不同的光线强度下所输出的相位信息的准确性,进而提高对焦的准确性。
操作840,按照相位信息输出模式,输出与像素阵列对应的相位阵列;其中,相位阵列包括像素阵列中目标像素对应的相位信息。
在根据当前拍摄场景的光线强度,确定与当前拍摄场景的光线强度适配的相位信息输出模式之后,就可以按照相位信息输出模式,输出与像素阵列对应的相位信息。具体的,在输出与像素阵列对应的相位信息时,可以以相位阵列的形式进行输出。其中,相位阵列包括像素阵列对应的相位信息。
具体的,在当前拍摄场景不同的光线强度下,按照与该光线强度适配的相位信息输出模式,将同一像素阵列对应的相位信息直接输出作为该像素阵列对应的相位阵列或进行一定程度地合并,生成该像素阵列对应的相位阵列,本申请对此不做限定。
操作860,基于相位阵列计算像素阵列的相位差,并根据相位差进行对焦控制。
在按照相位信息输出模式,输出与像素阵列对应的相位阵列之后,可以基于相位阵列中的相位信息计算像素阵列的相位差。假设可以获取像素阵列在第二方向的相位阵列,则基于第二方向上相邻的两个相位信息计算相位差,最终得到整个像素阵列在第二方向的相位差。假设可以获取像素阵列在第一方向的相位阵列,则基于第一方向上相邻的两个相位信息计算相位差,最终得到整个像素阵列在第一方向的相位差,且第二方向与第一方向不同。其中,第二方向可以为像素阵列的竖直方向,第一方向可以为像素阵列的水平方向,且第二方向与第一方向相互垂直。当然,可以同时获取整个像素阵列在第二方向及第一方向的相位差,还可以计算像素阵列在其他方向上的相位差,例如对角线方向(包括第一对角线方向,及与第一对角线方向垂直的第二对角线方向)等,本申请对此不做限定。
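基于相位阵列计算相位差的一种常见做法，是在一维方向上搜索使左右两组相位信息差异最小的偏移量。以下为一个示意性的Python草图（采用SAD即绝对差之和作为差异度量，函数名与示例数据均为假设，并非本申请限定的具体算法）：

```python
# 示意性草图：通过最小化SAD，在一维方向上搜索左右相位信息之间的偏移量（相位差）。
def phase_difference(left, right, max_shift=3):
    """left/right: 同一方向上左、右子像素的一维相位信息序列。返回最优偏移量。"""
    best_shift, best_cost = 0, float("inf")
    n = len(left)
    for s in range(-max_shift, max_shift + 1):
        cost, count = 0.0, 0
        for i in range(n):
            j = i + s
            if 0 <= j < n:
                cost += abs(left[i] - right[j])
                count += 1
        avg = cost / count  # 归一化，避免重叠长度不同带来的偏差
        if avg < best_cost:
            best_cost, best_shift = avg, s
    return best_shift

left = [0, 1, 5, 9, 5, 1, 0, 0]
right = [0, 0, 1, 5, 9, 5, 1, 0]  # 右相位信息相对左相位信息偏移了1个位置
print(phase_difference(left, right))  # 1
```

偏移量为0即对应合焦状态；偏移量越大，表示离合焦点越远。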
在基于所计算的相位差进行对焦控制时,由于针对当前拍摄场景对应的预览图像上某一方向的纹理特征,所采集到的平行于该方向的相位差几乎为0,显然不能基于所采集的平行于该方向的相位差进行对焦。因此,若当前拍摄场景对应的预览图像中包括第一方向的纹理特征,则基于像素阵列在第二方向的相位阵列计算像素阵列在第二方向的相位差。根据像素阵列在第二方向的相位差进行对焦控制。
例如,假设第二方向为像素阵列的竖直方向,第一方向为像素阵列的水平方向,且第二方向与第一方向相互垂直。那么,预览图像中包括第一方向的纹理特征,指的是预览图像中包括水平方向的条纹,可以是纯色的、水平方向的条纹。此时,当前拍摄场景对应的预览图像中包括水平方向的纹理特征,则基于竖直方向的相位差进行对焦控制。
若当前拍摄场景对应的预览图像中包括第二方向的纹理特征,则基于第一方向的相位差进行对焦控制。若当前拍摄场景对应的预览图像中包括第一对角线方向的纹理特征,则基于第二对角线方向的相位差进行对焦控制,反之同理。如此,针对不同方向的纹理特征,才能够准确地采集到相位差,进而准确地对焦。
本申请实施例中,根据当前拍摄场景的光线强度,确定与当前拍摄场景的光线强度适配的相位信息输出模式;其中,在不同的相位信息输出模式下,所输出的相位阵列的大小不同。按照相位信息输出模式,输出与像素阵列对应的相位阵列;其中,相位阵列包括像素阵列对应的相位信息。基于相位阵列计算像素阵列的相位差,并根据相位差进行对焦控制。
在当前拍摄场景的不同光线强度下,所能够采集到的原始相位信息的准确性不同。因此,可以根据当前拍摄场景的光线强度,针对同一像素阵列采用不同的相位信息输出模式,基于原始相位信息输出不同尺寸的相位阵列。由于不同尺寸的相位阵列的信噪比是不同的,因此,可以提高在不同的光线强度下所输出的相位信息的准确性,进而提高对焦控制的准确性。
在前一个实施例中,描述了根据当前拍摄场景的光线强度,针对同一像素阵列采用不同的相位信息输出模式,输出与该像素阵列对应的相位阵列,并基于相位阵列计算像素阵列的相位差,并根据相位差进行对焦控制。本实施例中,详细说明操作820,根据当前拍摄场景的光线强度,确定与当前拍摄场景的光线强度适配的相位信息输出模式的具体实现操作,包括:
确定当前拍摄场景的光线强度所属的目标光线强度范围;其中,不同的光线强度范围对应不同的相位信息输出模式;
根据目标光线强度范围,确定与当前拍摄场景的光线强度适配的相位信息输出模式。
具体的，可以基于光线强度的不同预设阈值，将光线强度按照大小顺序划分为不同的光线强度范围。其中，可以根据曝光参数及像素阵列中像素的尺寸来确定光线强度的预设阈值。其中，曝光参数包括快门速度、镜头光圈大小及感光度(ISO)。
然后,为不同的光线强度范围设置不同的相位信息输出模式。具体的,按照光线强度范围中光线强度的大小顺序,为不同的光线强度范围所设置的相位信息输出模式所输出的相位阵列的大小依次减小。
判断当前拍摄场景的光线强度落入哪个光线强度范围,则将光线强度范围作为当前拍摄场景的光线强度所属的目标光线强度范围。将该目标光线强度范围对应的相位信息输出模式,作为与当前拍摄场景的光线强度适配的相位信息输出模式。
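按光线强度范围选择相位信息输出模式的逻辑，可以用如下Python草图示意。其中阈值2000lux、500lux、50lux取自文中示例，模式名称为假设，本申请对阈值的具体取值不做限定：

```python
# 示意性草图：按当前拍摄场景的光线强度所属范围，选择相位信息输出模式。
def select_output_mode(lux, t1=2000, t2=500, t3=50):
    """t1/t2/t3 分别对应文中的第一、第二、第三预设阈值（示例值）。"""
    if lux > t1:
        return "full_size"     # 全尺寸输出模式
    if lux > t2:
        return "first_size"    # 第一尺寸输出模式
    if lux > t3:
        return "second_size"   # 第二尺寸输出模式
    return "third_size"        # 第三尺寸输出模式

print(select_output_mode(5000))  # full_size
print(select_output_mode(100))   # second_size
```

三个阈值将光线强度划分为4个范围，每个范围对应一种输出模式，光线越弱，输出的相位阵列越小、信噪比越高。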
本申请实施例中,在根据当前拍摄场景的光线强度,确定与当前拍摄场景的光线强度适配的相位信息输出模式时,由于不同的光线强度范围对应不同的相位信息输出模式,因此,先确定当前拍摄场景的 光线强度所属的目标光线强度范围。再根据目标光线强度范围,确定与当前拍摄场景的光线强度适配的相位信息输出模式。预先对不同的光线强度范围分别设置不同的相位信息输出模式,且每种相位信息输出模式下所输出的相位阵列的大小不同。因此,就可以基于当前拍摄场景的光线强度,对像素阵列进行更加精细化地计算相位信息,以实现更加准确地对焦。
接前一个实施例,进一步描述了相位信息输出模式包括全尺寸输出模式及第一尺寸输出模式,且全尺寸输出模式下的相位阵列的大小大于第一尺寸输出模式下相位阵列的大小。那么,如图9所示,操作824,根据目标光线强度范围,确定与当前拍摄场景的光线强度适配的相位信息输出模式,包括:
操作824a,若当前拍摄场景的光线强度大于第一预设阈值,则确定与当前拍摄场景的光线强度适配的相位信息输出模式为全尺寸输出模式;
操作824b,若当前拍摄场景的光线强度大于第二预设阈值,且小于或等于第一预设阈值,则确定与当前拍摄场景的光线强度适配的相位信息输出模式为第一尺寸输出模式。
具体的,若其中一个光线强度范围为大于第一预设阈值的范围,该光线强度范围对应的相位信息输出模式为全尺寸输出模式。则若判断当前拍摄场景的光线强度大于第一预设阈值,则当前拍摄场景的光线强度落入了该光线强度范围。即确定与当前拍摄场景的光线强度适配的相位信息输出模式为全尺寸输出模式。其中,采用全尺寸输出模式输出相位阵列,即为将像素阵列的原始相位信息全部输出,生成该像素阵列的相位阵列。
若其中一个光线强度范围为大于第二预设阈值,且小于或等于第一预设阈值的范围,该光线强度范围对应的相位信息输出模式为第一尺寸输出模式。则若判断当前拍摄场景的光线强度大于第二预设阈值,且小于或等于第一预设阈值,则当前拍摄场景的光线强度落入了该光线强度范围。即确定与当前拍摄场景的光线强度适配的相位信息输出模式为第一尺寸输出模式。其中,采用第一尺寸输出模式输出相位阵列,即为将像素阵列的原始相位信息进行合并后输出,生成该像素阵列的相位阵列。
本申请实施例中,由于全尺寸输出模式下的相位阵列的大小大于第一尺寸输出模式下相位阵列的大小,则若当前拍摄场景的光线强度大于第一预设阈值,则确定与当前拍摄场景的光线强度适配的相位信息输出模式为全尺寸输出模式。若当前拍摄场景的光线强度大于第二预设阈值,且小于或等于第一预设阈值,则确定与当前拍摄场景的光线强度适配的相位信息输出模式为第一尺寸输出模式。即若当前拍摄场景的光线强度较大,则采用全尺寸输出模式输出与像素阵列尺寸相同的相位阵列,而若当前拍摄场景的光线强度次之,则采用第一尺寸输出模式输出比像素阵列尺寸较小的相位阵列。即在当前拍摄场景的光线强度次之的情况下,通过缩小相位阵列来提高相位信息的信噪比。
在前一个实施例中,描述了像素阵列可以是RGBW像素阵列,包括多个最小重复单元241,最小重复单元241包括多个像素组242,多个像素组242包括全色像素组243和彩色像素组244。每个全色像素组243中包括9个全色像素2431,每个彩色像素组244中包括9个彩色像素2441。每个全色像素2431包括阵列排布的2个子像素,每个彩色像素2441包括阵列排布的2个子像素。
本实施例中,在像素阵列是RGBW像素阵列的情况下,如图10所示,详细说明若相位信息输出模式为全尺寸输出模式,则按照相位信息输出模式,输出与像素阵列对应的相位阵列的具体实现操作,包括:
操作1020,将像素阵列中的彩色像素组作为目标像素组;
操作1040,针对各目标像素组,获取目标像素组中每个像素的子像素的相位信息;
操作1060,根据目标像素的子像素的相位信息,生成与像素阵列对应的全尺寸相位阵列;全尺寸相位阵列的大小为阵列排布的6×3个像素的大小。
本实施例为在当前拍摄场景的光线强度大于第一预设阈值的情况下,按照全尺寸输出模式,输出与像素阵列对应的相位阵列的具体实现操作。其中,第一预设阈值可以为2000lux,本申请对此不做限定。即此时处于光线强度大于2000lux的环境中。结合图11所示,其中,首先从像素阵列中确定彩色像素组作为用于计算相位信息的目标像素组。因为在当前拍摄场景的光线强度大于第一预设阈值的情况下,即在光线充足的场景下,由于全色像素灵敏度高,在光线充足的场景下容易饱和,而饱和后将得不到正确的相位信息,所以此时可以使用彩色像素组的相位信息来实现相位对焦(PDAF)。具体的,可以使 用像素阵列中的部分彩色像素组的相位信息来实现相位对焦,还可以是使用部分像素组中的部分像素来实现相位对焦,本申请对此不做限定。由于此时只使用彩色像素组的相位信息来进行相位对焦,所以减小了所输出的相位信息的数据量,进而提高了相位对焦的效率。
其次,针对各目标像素组,获取目标像素组中每个目标像素的子像素的相位信息。其中,每个像素包括阵列排布的至少两个子像素,每个子像素对应一个感光元件。假设此时每个像素包括阵列排布的两个子像素,这两个子像素可以采用上下排布,也可以采用左右排布,本申请对此不做限定。在本申请实施例中,选择采用左右排布的两个子像素来进行说明,那么,针对各目标像素组,获取该目标像素组中每个目标像素的子像素的相位信息,即从每个目标像素中获取左右排布的两个子像素的相位信息。最后,将所有的目标像素的相位信息输出,作为与像素阵列对应的全尺寸相位阵列。
结合图11所示，一个像素阵列可以包括2个红色像素组、4个绿色像素组、2个蓝色像素组以及8个全色像素组。这里，结合图7所示，可以假设a表示绿色，b表示红色，c表示蓝色，w表示全色。并假设将该像素阵列中的所有彩色像素组都作为目标像素组，则针对该像素阵列中所包括的2个红色像素组、4个绿色像素组及2个蓝色像素组，依次计算各像素组的相位信息。例如，针对红色像素组244计算该红色像素组的相位信息，红色像素组包括按照3×3阵列排布的9个红色像素，依次编号为红色像素1至红色像素9。其中，每个像素包括左右排布的两个子像素，每个子像素对应一个感光元件。即红色像素1至红色像素9各自包括左右排布的两个子像素，这些子像素的相位信息依次为L1、R1，L2、R2，L3、R3，L4、R4，L5、R5，L6、R6，L7、R7，L8、R8，L9、R9。
最后,将所有的目标像素的相位信息输出,作为与该红色像素阵列对应的全尺寸相位阵列。即由L1、R1,L2、R2,L3、R3,L4、R4,L5、R5,L6、R6,L7、R7,L8、R8,L9、R9,依次排列生成了全尺寸相位阵列。且该全尺寸相位阵列的大小相当于阵列排布的6×3个像素的大小。这里,像素的大小指的是一个像素的面积大小,该面积大小与像素的长与宽相关。
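3×3像素组按全尺寸输出模式展开为6×3相位阵列的过程，可以用如下Python草图示意（数据与函数名均为假设，仅用于说明每个像素的左右子像素相位信息逐一输出的排列方式）：

```python
# 示意性草图：全尺寸输出模式。3×3像素组中每个像素的(L, R)子像素相位信息
# 逐一展开，得到3行6列（即文中"6×3个像素大小"）的全尺寸相位阵列。
def full_size_phase_array(group):
    """group: 3×3列表，每个元素为该像素的(L, R)子像素相位信息二元组。"""
    return [[v for (l, r) in row for v in (l, r)] for row in group]

group = [[(1, 2), (3, 4), (5, 6)],
         [(7, 8), (9, 10), (11, 12)],
         [(13, 14), (15, 16), (17, 18)]]
out = full_size_phase_array(group)
print(out[0])  # [1, 2, 3, 4, 5, 6]
```

全尺寸模式不做任何合并，分辨率最高，适用于光线充足的场景。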
其中，像素是数码相机感光器件(CCD或CMOS)上的最小感光单位。其中，CCD是电荷耦合器件(charge coupled device)的简称，CMOS(Complementary Metal-Oxide-Semiconductor)即互补金属氧化物半导体。一般情况下，像素没有固定的大小，像素的大小与显示屏的尺寸以及分辨率相关。例如，显示屏的尺寸为4.5英寸，且显示屏的分辨率为1280×720，则显示屏的长为99.6mm，宽为56mm，则一个像素的长为99.6mm/1280=0.0778mm，宽也为56mm/720=0.0778mm。在这个例子中，阵列排布的6×3个像素的大小为：长为6×0.0778mm，宽为3×0.0778mm。当然，本申请对此不做限定。那么，该全尺寸相位阵列的大小的长为6×0.0778mm，宽为3×0.0778mm。当然，在其他实施例中，像素也可以不是长宽相等的矩形，像素还可以是其他异形结构，本申请对此不做限定。
同理,针对像素阵列中的其他彩色像素组,也是采用上述方法生成了各自的全尺寸相位阵列。基于所有的全尺寸相位阵列,就得到了该像素阵列的相位信息。
此时，可以将相位阵列输入ISP(Image Signal Processing)，通过ISP基于相位阵列计算像素阵列的相位差。然后，基于相位差计算出离焦距离，并计算出与该离焦距离对应的DAC code值。最后，通过马达(VCM)的driver IC将code值转换为驱动电流，并由马达驱动镜头移动到清晰位置。从而，根据相位差实现了对焦控制。
本申请实施例中,在当前拍摄场景的光线强度大于第一预设阈值的情况下,按照全尺寸输出模式,输出与像素阵列对应的相位阵列。将像素阵列中的彩色像素组作为目标像素组,针对各目标像素组,获取目标像素组中每个像素的子像素的相位信息。最后,根据目标像素的子像素的相位信息,生成与像素 阵列对应的全尺寸相位阵列。由于此时只使用彩色像素组的相位信息来进行相位对焦,所以减小了所输出的相位信息的数据量,进而提高了相位对焦的效率。
在一个实施例中,在目标像素组中的每个目标像素对应的至少两个感光元件沿第一方向排列的情况下,如图12所示,详细说明若相位信息输出模式为第一尺寸输出模式,则按照相位信息输出模式,输出与像素阵列对应的相位阵列的具体实现操作,包括:
操作1220,将像素阵列中的彩色像素组、全色像素组中的至少一种作为目标像素组;
本实施例为在当前拍摄场景的光线强度大于第二预设阈值,且小于或等于第一预设阈值的情况下,按照第一尺寸输出模式,输出与像素阵列对应的相位阵列的具体实现操作。其中,第二预设阈值可以为500lux,本申请对此不做限定。即此时处于光线强度大于500lux,且小于或等于2000lux的环境中。结合图12所示,其中,首先从像素阵列中确定彩色像素组、全色像素组中的至少一种作为用于计算相位信息的目标像素组。因为在当前拍摄场景的光线强度大于第二预设阈值,且小于或等于第一预设阈值的情况下,即在光线稍弱的场景下,全色像素在光线稍弱的场景下不容易饱和,所以此时可以使用全色像素组的相位信息来实现相位对焦(PDAF)。且在光线稍弱的场景下,彩色像素在光线稍弱的场景下也能够获取到准确的相位信息,所以此时也可以使用彩色像素组的相位信息来实现相位对焦(PDAF)。因此,在按照第一尺寸输出模式,输出与像素阵列对应的相位阵列时,可以选择彩色像素组作为目标像素组,也可以选择全色像素组作为目标像素组,还可以选择彩色像素组及全色像素组作为目标像素组,本申请对此不做限定。
具体的,针对目标像素组为彩色像素组的情况,可以使用像素阵列中的部分彩色像素组的相位信息来实现相位对焦,还可以是使用部分彩色像素组中的部分彩色像素来实现相位对焦,本申请对此不做限定。同理,针对目标像素组为全色像素组的情况,可以使用像素阵列中的部分全色像素组的相位信息来实现相位对焦,还可以是使用部分全色像素组中的部分全色像素来实现相位对焦,本申请对此不做限定。同理,针对目标像素组为彩色像素组及全色像素组的情况,可以使用像素阵列中的部分全色像素组、部分彩色像素组的相位信息来实现相位对焦,还可以是使用部分全色像素组中的部分全色像素、部分彩色像素组中的部分彩色像素来实现相位对焦,本申请对此不做限定。
由于此时可以只使用部分像素组的相位信息来进行相位对焦,或只使用了部分像素组中的部分像素的相位信息进行相位对焦,所以减小了所输出的相位信息的数据量,进而提高了相位对焦的效率。
操作1240,针对每个目标像素组,沿第二方向以一个子像素作为滑窗步长获取多组相邻的两个子像素的相位信息;其中,第二方向与第一方向相互垂直;
操作1260,将多组相邻的两个子像素的相位信息进行合并,生成多组第一合并相位信息;
其次,针对各目标像素组,获取目标像素组中每个目标像素的子像素的相位信息。其中,每个像素包括阵列排布的至少两个子像素,每个子像素对应一个感光元件。假设此时每个像素包括阵列排布的两个子像素,这两个子像素可以采用上下排布,也可以采用左右排布,本申请对此不做限定。在本申请实施例中,选择采用左右排布的两个子像素来进行说明,那么,针对各目标像素组,沿第二方向以一个子像素作为滑窗步长获取相邻的两个子像素的相位信息。再将相邻的两个子像素的相位信息进行合并,生成第一合并相位信息。例如,若目标像素组为像素阵列中的全色像素组,则沿第二方向以一个子像素作为滑窗步长获取相邻的两个子像素的相位信息,可以获取到12对相邻的两个子像素的相位信息。再将这12组相邻的两个子像素的相位信息分别进行合并,生成12个第一合并相位信息。
结合图13所示，一个像素阵列可以包括2个红色像素组、4个绿色像素组、2个蓝色像素组以及8个全色像素组。假设将该像素阵列中的所有全色像素组都作为目标像素组，则针对该像素阵列中所包括的8个全色像素组，依次计算各像素组的相位信息。例如，针对全色像素组计算该全色像素组的相位信息，全色像素组包括按照3×3阵列排布的9个全色像素，依次编号为全色像素1至全色像素9。其中，每个像素包括左右排布的两个子像素，每个子像素对应一个感光元件。即全色像素1至全色像素9各自包括左右排布的两个子像素，这些子像素的相位信息依次为L1、R1，L2、R2，L3、R3，L4、R4，L5、R5，L6、R6，L7、R7，L8、R8，L9、R9。
最后,沿第二方向以一个子像素作为滑窗步长获取12组相邻的两个子像素的相位信息。该12组相邻的两个子像素的相位信息分别为:L1和L4、L2和L5、L3和L6、L4和L7、L5和L8、L6和L9;R1和R4、R2和R5、R3和R6、R4和R7、R5和R8、R6和R9。
然后,将这12组相邻的两个子像素的相位信息分别进行合并,生成12个第一合并相位信息。例如,将L1和L4合并生成第一合并相位信息L 1、L2和L5合并生成第一合并相位信息L 2、L3和L6合并生成第一合并相位信息L 3、L4和L7合并生成第一合并相位信息L 4、L5和L8合并生成第一合并相位信息L 5、L6和L9合并生成第一合并相位信息L 6;将R1和R4合并生成第一合并相位信息R 1、R2和R5合并生成第一合并相位信息R 2、R3和R6合并生成第一合并相位信息R 3、R4和R7合并生成第一合并相位信息R 4、R5和R8合并生成第一合并相位信息R 5、R6和R9合并生成第一合并相位信息R 6
操作1280,根据多组第一合并相位信息,生成与像素阵列对应的第一尺寸相位阵列,第一尺寸相位阵列的大小为阵列排布的4×2个像素的大小。
在将相邻的两个子像素的相位信息进行合并,生成第一合并相位信息之后,就可以根据第一合并相位信息,生成与像素阵列对应的第一尺寸相位阵列。具体的,若目标像素组为像素阵列中的全色像素组,则将相邻的两个子像素的相位信息进行合并,生成了12组第一合并相位信息。那么,可以是直接将该12组第一合并相位信息输出,作为与该全色像素阵列对应的第一尺寸相位阵列。即由L 1、R 1、L 2、R 2、L 3、R 3、L 4、R 4、L 5、R 5、L 6、R 6,依次排列生成了第一尺寸相位阵列。且该第一尺寸相位阵列的大小相当于阵列排布的6×2个像素的大小。
当然,还可以是对12组第一合并相位信息进行合并处理或变换处理,生成与像素阵列对应的第一尺寸相位阵列。这里的变换处理,可以是对12组第一合并相位信息进行校正等处理,本申请对此不做限定。在对12组第一合并相位信息进行合并处理时,可以将相当于6×2个像素大小的相位阵列,合并为4×2个像素大小的相位阵列。当然,本申请并不对合并后的相位阵列的具体大小做限定。这里,像素的大小指的是一个像素的面积大小,该面积大小与像素的长与宽相关。
其中，像素是数码相机感光器件(CCD或CMOS)上的最小感光单位。一般情况下，像素没有固定的大小，像素的大小与显示屏的尺寸以及分辨率相关。例如，显示屏的尺寸为4.5英寸，且显示屏的分辨率为1280×720，则显示屏的长为99.6mm，宽为56mm，则一个像素的长为99.6mm/1280=0.0778mm，宽也为56mm/720=0.0778mm。在这个例子中，阵列排布的4×2个像素的大小为：长为4×0.0778mm，宽为2×0.0778mm。当然，本申请对此不做限定。那么，该第一尺寸相位阵列的大小的长为4×0.0778mm，宽为2×0.0778mm。当然，在其他实施例中，像素也可以不是长宽相等的矩形，像素还可以是其他异形结构，本申请对此不做限定。
同理,针对像素阵列中的其他全色像素组,也是采用上述方法生成了各自的第一尺寸相位阵列。基于所有的第一尺寸相位阵列,就得到了该像素阵列的相位信息。
此时，可以将相位阵列输入ISP，通过ISP基于相位阵列计算像素阵列的相位差。然后，基于相位差计算出离焦距离，并计算出与该离焦距离对应的DAC code值。最后，通过马达(VCM)的driver IC将code值转换为驱动电流，并由马达驱动镜头移动到清晰位置。从而，根据相位差实现了对焦控制。
本申请实施例中,在当前拍摄场景的光线强度大于第二预设阈值,且小于或等于第一预设阈值的情况下,因为此时的光线强度稍弱,则通过彩色像素组或全色像素组所采集到的相位信息不是很准确,部分彩色像素组或部分全色像素组可能未采集到相位信息。因此,将像素阵列中的彩色像素组或全色像素组中的至少一种作为目标像素组,且针对各目标像素组,采用第一尺寸输出模式将子像素的相位信息进行一定程度的合并,提高所输出的相位信息的准确性,提高相位信息的信噪比。最终,基于与像素阵列对应的第一尺寸相位阵列进行相位对焦,就可以提高对焦的准确性。
在图12所示实施例的基础上,在一个实施例中,操作1280,根据多组第一合并相位信息,生成与像素阵列对应的第一尺寸相位阵列,包括:
将多组相邻的两个第一合并相位信息再次进行合并,生成目标相位信息;其中,用于生成相邻的两个第一合并相位信息的子像素在目标像素中处于同一位置;
根据目标相位信息,生成像素阵列在第一方向的第一尺寸相位阵列。
结合图13所示,针对全色像素组,从第一合并相位信息中确定相邻的两个第一合并相位信息。具体的,判断用于生成第一合并相位信息的子像素在像素中是否处于同一位置;若是,则确定这两个第一合并相位信息为相邻的两个第一合并相位信息。将多组相邻的两个第一合并相位信息再次进行合并,生成目标相位信息。将目标相位信息输出,就生成了像素阵列在第一方向的第一尺寸相位阵列。
具体的，从第一合并相位信息L 1、第一合并相位信息L 2、第一合并相位信息L 3、第一合并相位信息L 4、第一合并相位信息L 5、第一合并相位信息L 6中，确定相邻的两个第一合并相位信息。其中，用于生成第一合并相位信息L 1的子像素为全色像素1及全色像素4的左半部分的子像素，用于生成第一合并相位信息L 2的子像素为全色像素2及全色像素5的左半部分的子像素。且判断出全色像素1及全色像素4的左半部分的子像素、全色像素2及全色像素5的左半部分的子像素在各自像素中处于同一位置(均处于左侧)。因此，确定第一合并相位信息L 1及第一合并相位信息L 2为相邻的两个第一合并相位信息。同理，确定第一合并相位信息L 2及第一合并相位信息L 3、第一合并相位信息L 4及第一合并相位信息L 5、第一合并相位信息L 5及第一合并相位信息L 6均为相邻的两个第一合并相位信息。同理，确定第一合并相位信息R 1及第一合并相位信息R 2、第一合并相位信息R 2及第一合并相位信息R 3、第一合并相位信息R 4及第一合并相位信息R 5、第一合并相位信息R 5及第一合并相位信息R 6均为相邻的两个第一合并相位信息。
将相邻的两个第一合并相位信息再次进行合并,生成目标相位信息。即将第一合并相位信息L 1及第一合并相位信息L 2再次进行合并,生成目标相位信息。同理,将第一合并相位信息L 2及第一合并相位信息L 3再次进行合并,生成目标相位信息;将第一合并相位信息L 4及第一合并相位信息L 5再次进行合并,生成目标相位信息;将第一合并相位信息L 5及第一合并相位信息L 6再次进行合并,生成目标相位信息。同理,将第一合并相位信息R 1及第一合并相位信息R 2再次进行合并,生成目标相位信息;将第一合并相位信息R 2及第一合并相位信息R 3再次进行合并,生成目标相位信息;将第一合并相位信息R 4及第一合并相位信息R 5再次进行合并,生成目标相位信息;将第一合并相位信息R 5及第一合并相位信息R 6再次进行合并,生成目标相位信息。
将所有的目标相位信息输出,就生成了像素阵列在第一方向的第一尺寸相位阵列。
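第一尺寸输出模式的两次合并过程，可以用如下Python草图示意。其中以一侧（左或右）子像素的3×3相位信息为输入，先沿第二方向以一个子像素为滑窗步长合并相邻两行，再将同位置相邻的两个第一合并相位信息再次合并；合并采用求和方式为假设，实际也可以是求均值等，本申请对此不做限定：

```python
# 示意性草图：第一尺寸输出模式的两次合并。
def first_size_merge(vals):
    """vals: 3×3（行优先）的某一侧子像素相位信息，返回2×2的目标相位信息。"""
    # 第一次合并：沿第二方向（竖直）以一个子像素为滑窗步长，合并相邻两行
    m = [[vals[r][c] + vals[r + 1][c] for c in range(3)] for r in range(2)]
    # 第二次合并：将同位置相邻的两个第一合并相位信息再次合并
    return [[row[c] + row[c + 1] for c in range(2)] for row in m]

L = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(first_size_merge(L))  # [[12, 16], [24, 28]]
```

对左、右两侧子像素分别执行上述合并，即得到文中所述"4×2个像素大小"的第一尺寸相位阵列。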
本申请实施例中,在当前拍摄场景的光线强度大于第二预设阈值,且小于或等于第一预设阈值的情况下,因为此时的光线强度稍弱,则通过彩色像素组或全色像素组所采集到的相位信息不是很准确,部分彩色像素组或部分全色像素组可能未采集到相位信息。因此,将像素阵列中的彩色像素组或全色像素组中的至少一种作为目标像素组,且针对各目标像素组,采用第一尺寸输出模式将子像素的相位信息进行两次合并,提高所输出的相位信息的准确性,提高相位信息的信噪比。最终,基于与像素阵列对应的第一尺寸相位阵列进行相位对焦,就可以提高对焦的准确性。
接前一个实施例,若目标像素组包括彩色像素组及全色像素组,则根据目标相位信息,生成像素阵列在第一方向的第一尺寸相位阵列,包括:
根据当前拍摄场景的光线强度确定彩色像素组对应的第一相位权重以及全色像素组对应的第二相位权重;其中,彩色像素组在不同的光线强度下所对应的第一相位权重不同,全色像素组在不同的光线强度下所对应的第二相位权重不同;
基于彩色像素组的目标相位信息及第一相位权重、全色像素组的目标相位信息及第二相位权重,生成像素阵列在第一方向的第一尺寸相位阵列。
具体的,在当前拍摄场景的光线强度大于第二预设阈值,且小于或等于第一预设阈值的场景下,且所确定的目标像素组包括彩色像素组及全色像素组,那么,基于目标像素组的相位信息生成第一尺寸相位阵列时,可以考虑不同像素组之间的权重。其中,可以根据当前拍摄场景的光线强度确定彩色像素组 对应的第一相位权重以及全色像素组对应的第二相位权重。具体的,当前拍摄场景的光线强度越接近第二预设阈值,则此时彩色像素组对应的第一相位权重越小,而全色像素组对应的第二相位权重越大。因为此时光线强度在大于第二预设阈值,且小于或等于第一预设阈值的场景下偏小,全色像素组对应的第二相位权重越大,则所获取到的相位信息越准确。随着光线强度增大,当前拍摄场景的光线强度越接近第一预设阈值,则此时彩色像素组对应的第一相位权重越大,而全色像素组对应的第二相位权重越小。因为此时光线强度在大于第二预设阈值,且小于或等于第一预设阈值的场景下偏大,彩色像素组对应的第一相位权重越大,则所获取到的相位信息越全面、越准确。其中,彩色像素组在不同的光线强度下所对应的第一相位权重不同,全色像素组在不同的光线强度下所对应的第二相位权重不同。例如,当前拍摄场景的光线强度为2000lux时,确定彩色像素组对应的第一相位权重为40%,其中,绿色像素组的相位权重为20%,红色像素组的相位权重为10%,蓝色像素组的相位权重为10%。并确定全色像素组对应的第二相位权重为60%,本申请对此不做限定。
然后,就可以基于彩色像素组的目标相位信息及第一相位权重、全色像素组的目标相位信息及第二相位权重,生成像素阵列在第一方向的第一尺寸相位阵列。例如,针对该像素阵列,基于第一个红色像素组的目标相位信息及相位权重10%、第二个红色像素组的目标相位信息及相位权重10%,第一个蓝色像素组的目标相位信息及相位权重10%、第二个蓝色像素组的目标相位信息及相位权重10%,以及各个绿色像素组的目标相位信息及相位权重20%,还有各个全色像素组的目标相位信息及相位权重60%,共同求和计算出像素阵列在第一方向的相位信息,即得到了第一尺寸相位阵列。
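上述彩色像素组与全色像素组相位信息的加权融合，可以用如下Python草图示意。其中权重随光线强度线性变化的方式以及40%/60%等数值均为假设（40%/60%取自文中示例），本申请对权重的具体确定方式不做限定：

```python
# 示意性草图：按光线强度为彩色像素组与全色像素组分配相位权重后加权求和。
def fuse_phase(color_phase, pan_phase, lux, t2=500, t1=2000):
    """光线越接近第一预设阈值t1，彩色权重越大；越接近第二预设阈值t2，全色权重越大。"""
    ratio = (lux - t2) / (t1 - t2)      # 在(t2, t1]区间内归一化
    w_color = 0.2 + 0.2 * ratio         # 假设彩色权重在[0.2, 0.4]间线性变化
    w_pan = 1.0 - w_color               # 全色权重与彩色权重互补
    return w_color * color_phase + w_pan * pan_phase

print(round(fuse_phase(10.0, 12.0, 2000), 2))  # 11.2（彩色40%、全色60%）
```

这样，在靠近不同阈值的光线强度下，相位信息分别更多地依赖彩色或全色像素组。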
本申请实施例中,在进行相位对焦时,若确定目标像素组包括彩色像素组及全色像素组,则可以基于彩色像素组的目标相位信息及其第一相位权重、全色像素组的目标相位信息及其第二相位权重,生成像素阵列的第一尺寸相位阵列。如此,基于彩色像素组及全色像素组的目标相位信息,共同生成像素阵列的第一尺寸相位阵列,可以提高相位信息的全面性。同时,在不同的光线强度下彩色像素组及全色像素组的目标相位信息的相位权重不同,如此,能够在不同的光线强度下通过调节权重大小来提高相位信息的准确性。
在前述实施例中,描述了像素阵列可以是RGBW像素阵列,包括多个最小重复单元241,最小重复单元241包括多个像素组242,多个像素组242包括全色像素组243和彩色像素组244。每个全色像素组243中包括9个全色像素2431,每个彩色像素组244中包括9个彩色像素2441。每个全色像素2431包括阵列排布的2个子像素,每个彩色像素2441包括阵列排布的2个子像素。
本实施例中,在像素阵列是RGBW像素阵列的情况下,相位信息输出模式还包括第二尺寸输出模式及第三尺寸输出模式;其中,第一尺寸输出模式下的相位阵列的大小大于第二尺寸输出模式下相位阵列的大小;第二尺寸输出模式下的相位阵列的大小大于第三尺寸输出模式下相位阵列的大小;
结合图9所示,操作824,根据目标光线强度范围,确定与当前拍摄场景的光线强度适配的相位信息输出模式,包括:
操作824c,若当前拍摄场景的光线强度大于第三预设阈值,且小于或等于第二预设阈值,则确定与当前拍摄场景的光线强度适配的相位信息输出模式为第二尺寸输出模式;
操作824d,若当前拍摄场景的光线强度小于或等于第三预设阈值,则确定与当前拍摄场景的光线强度适配的相位信息输出模式为第三尺寸输出模式;第二预设阈值大于第三预设阈值。
具体的，若其中一个光线强度范围为大于第三预设阈值，且小于或等于第二预设阈值的范围，该光线强度范围对应的相位信息输出模式为第二尺寸输出模式。则若判断当前拍摄场景的光线强度大于第三预设阈值，且小于或等于第二预设阈值，则当前拍摄场景的光线强度落入了该光线强度范围。即确定与当前拍摄场景的光线强度适配的相位信息输出模式为第二尺寸输出模式。其中，第三预设阈值可以为50lux，本申请对此不做限定。即此时处于黄昏，或处于光线强度大于50lux且小于或等于500lux的环境中。
其中，采用第二尺寸输出模式输出相位阵列，即为将像素阵列的原始相位信息进行合并后输出，生成该像素阵列的相位阵列。换言之，像素阵列的尺寸大于该像素阵列的相位阵列的大小。例如，若像素阵列的尺寸为12×12，则该像素阵列中各目标像素组的相位阵列的大小是2×1，本申请中并不对该相位阵列的大小做出限定。
若其中一个光线强度范围为小于或等于第三预设阈值的范围，该光线强度范围对应的相位信息输出模式为第三尺寸输出模式。则若判断当前拍摄场景的光线强度小于或等于第三预设阈值，则当前拍摄场景的光线强度落入了该光线强度范围。即确定与当前拍摄场景的光线强度适配的相位信息输出模式为第三尺寸输出模式。其中，采用第三尺寸输出模式输出相位阵列，即为将像素阵列的原始相位信息进行合并后输出，生成该像素阵列的相位阵列。换言之，像素阵列的尺寸大于该像素阵列的相位阵列的大小。例如，若像素阵列的尺寸为12×12，则该像素阵列的相位阵列的大小是4×2，本申请中并不对该像素阵列的尺寸、相位阵列的大小做出限定。
本申请实施例中，由于第二尺寸输出模式下的相位阵列的大小大于第三尺寸输出模式下相位阵列的大小，则若当前拍摄场景的光线强度大于第三预设阈值，且小于或等于第二预设阈值，则确定与当前拍摄场景的光线强度适配的相位信息输出模式为第二尺寸输出模式。若当前拍摄场景的光线强度小于或等于第三预设阈值，则确定与当前拍摄场景的光线强度适配的相位信息输出模式为第三尺寸输出模式。即若当前拍摄场景的光线强度相对较大，则采用第二尺寸输出模式输出尺寸较大的相位阵列，而若当前拍摄场景的光线强度更弱，则采用第三尺寸输出模式输出尺寸更小的相位阵列。即在当前拍摄场景的光线强度更弱的情况下，通过缩小相位阵列来提高相位信息的信噪比。
在一个实施例中,目标像素组中的每个目标像素对应的至少两个感光元件沿第一方向排列;若相位信息输出模式为第二尺寸输出模式,如图14所示,则按照相位信息输出模式,输出与像素阵列对应的相位阵列,包括:
操作1420,将像素阵列中的彩色像素组及全色像素组作为目标像素组,或将全色像素组作为目标像素组;
本实施例为在当前拍摄场景的光线强度大于第三预设阈值，且小于或等于第二预设阈值的情况下，按照第二尺寸输出模式，输出与像素阵列对应的相位阵列的具体实现操作。结合图14所示，其中，首先将像素阵列中的彩色像素组及全色像素组作为目标像素组，或将全色像素组作为计算相位信息的目标像素组。因为在当前拍摄场景的光线强度大于第三预设阈值，且小于或等于第二预设阈值的情况下，即在光线更弱的场景下，全色像素能够接收到更多的光线信息，所以此时可以使用全色像素组的相位信息来实现相位对焦(PDAF)。且在光线更弱的场景下，彩色像素也可以辅助全色像素以获取到准确的相位信息，所以此时也可以使用彩色像素组及全色像素组的相位信息来实现相位对焦(PDAF)，也可以仅使用全色像素组的相位信息来实现相位对焦(PDAF)。因此，在按照第二尺寸输出模式，输出与像素阵列对应的相位阵列时，可以选择彩色像素组及全色像素组作为目标像素组，也可以选择全色像素组作为目标像素组，本申请对此不做限定。
具体的,针对目标像素组为全色像素组的情况,可以使用像素阵列中的部分全色像素组的相位信息来实现相位对焦,还可以是使用部分全色像素组中的部分全色像素来实现相位对焦,本申请对此不做限定。同理,针对目标像素组为彩色像素组及全色像素组的情况,可以使用像素阵列中的部分全色像素组、部分彩色像素组的相位信息来实现相位对焦,还可以是使用部分全色像素组中的部分全色像素、部分彩色像素组中的部分彩色像素来实现相位对焦,本申请对此不做限定。
由于此时可以只使用部分像素组的相位信息来进行相位对焦,或只使用了部分像素组中的部分像素的相位信息进行相位对焦,所以减小了所输出的相位信息的数据量,进而提高了相位对焦的效率。
操作1440,针对每个目标像素组,沿第二方向获取多组相邻的三个子像素的相位信息;
操作1460,将多组相邻的三个子像素的相位信息进行合并,生成多组第二合并相位信息;
其次，针对各目标像素组，获取目标像素组中每个目标像素的子像素的相位信息。其中，每个像素包括阵列排布的至少两个子像素，每个子像素对应一个感光元件。假设此时每个像素包括阵列排布的两个子像素，这两个子像素可以采用上下排布，也可以采用左右排布，本申请对此不做限定。在本申请实施例中，选择采用左右排布的两个子像素来进行说明，那么，针对各目标像素组，沿第二方向获取多组相邻的三个子像素的相位信息。再将相邻的三个子像素的相位信息进行合并，生成第二合并相位信息。例如，若目标像素组为像素阵列中的全色像素组，则沿第二方向获取多组相邻的三个子像素的相位信息，可以获取到6组相邻的三个子像素的相位信息。再将这6组相邻的三个子像素的相位信息分别进行合并，生成6组第二合并相位信息。
结合图15所示，一个像素阵列可以包括2个红色像素组、4个绿色像素组、2个蓝色像素组以及8个全色像素组。假设将该像素阵列中的所有全色像素组都作为目标像素组，则针对该像素阵列中所包括的8个全色像素组，依次计算各像素组的相位信息。例如，针对全色像素组计算该全色像素组的相位信息，全色像素组包括按照3×3阵列排布的9个全色像素，依次编号为全色像素1至全色像素9。其中，每个像素包括左右排布的两个子像素，每个子像素对应一个感光元件。即全色像素1至全色像素9各自包括左右排布的两个子像素，这些子像素的相位信息依次为L1、R1，L2、R2，L3、R3，L4、R4，L5、R5，L6、R6，L7、R7，L8、R8，L9、R9。
最后,沿第二方向获取多组相邻的三个子像素的相位信息。该6组相邻的三个子像素的相位信息分别为:L1、L4和L7,L2、L5和L8,L3、L6和L9;R1、R4和R7,R2、R5和R8,R3、R6和R9。
然后,将这6组相邻的三个子像素的相位信息分别进行合并,生成6组第二合并相位信息。例如,将L1、L4和L7合并生成第二合并相位信息L 1,L2、L5和L8合并生成第二合并相位信息L 2,L3、L6和L9合并生成第二合并相位信息L 3。将R1、R4和R7合并生成第二合并相位信息R 1,R2、R5和R8合并生成第二合并相位信息R 2,R3、R6和R9合并生成第二合并相位信息R 3
操作1480,根据多组第二合并相位信息,生成与像素阵列对应的第二尺寸相位阵列,第二尺寸相位阵列的大小为2×1个像素的大小。
在将多组相邻的三个子像素的相位信息进行合并，生成多组第二合并相位信息之后，就可以根据多组第二合并相位信息，生成与像素阵列对应的第二尺寸相位阵列。具体的，若目标像素组为像素阵列中的全色像素组，则将多组相邻的三个子像素的相位信息进行合并，生成了6组第二合并相位信息。那么，可以是直接将该6组第二合并相位信息输出，作为与该全色像素阵列对应的第二尺寸相位阵列。即由L 1、R 1、L 2、R 2、L 3、R 3，依次排列生成了第二尺寸相位阵列。且该第二尺寸相位阵列的大小相当于阵列排布的6×1个像素的大小。
当然,还可以是对6组第二合并相位信息进行合并处理或变换处理,生成与像素阵列对应的第二尺寸相位阵列。这里的变换处理,可以是对6组第二合并相位信息进行校正等处理,本申请对此不做限定。在对6组第二合并相位信息进行合并处理时,可以将相当于6×1个像素大小的相位阵列,合并为2×1个像素大小的相位阵列。当然,本申请并不对合并后的相位阵列的具体大小做限定。这里,像素的大小指的是一个像素的面积大小,该面积大小与像素的长与宽相关。
其中，像素是数码相机感光器件(CCD或CMOS)上的最小感光单位。一般情况下，像素没有固定的大小，像素的大小与显示屏的尺寸以及分辨率相关。例如，显示屏的尺寸为4.5英寸，且显示屏的分辨率为1280×720，则显示屏的长为99.6mm，宽为56mm，则一个像素的长为99.6mm/1280=0.0778mm，宽也为56mm/720=0.0778mm。在这个例子中，阵列排布的2×1个像素的大小为：长为2×0.0778mm，宽为1×0.0778mm。当然，本申请对此不做限定。那么，该第二尺寸相位阵列的大小的长为2×0.0778mm，宽为1×0.0778mm。当然，在其他实施例中，像素也可以不是长宽相等的矩形，像素还可以是其他异形结构，本申请对此不做限定。
同理,针对像素阵列中的其他全色像素组,也是采用上述方法生成了各自的第二尺寸相位阵列。基于所有的第二尺寸相位阵列,就得到了该像素阵列的相位信息。
此时，可以将相位阵列输入ISP，通过ISP基于相位阵列计算像素阵列的相位差。然后，基于相位差计算出离焦距离，并计算出与该离焦距离对应的DAC code值。最后，通过马达(VCM)的driver IC将code值转换为驱动电流，并由马达驱动镜头移动到清晰位置。从而，根据相位差实现了对焦控制。
本申请实施例中,在当前拍摄场景的光线强度大于第三预设阈值,且小于或等于第二预设阈值的情况下,因为此时的光线强度更弱,由于全色像素能够接收到更多的光线信息,所以此时可以使用全色像素组的相位信息来实现相位对焦(PDAF)。且在光线更弱的场景下,彩色像素在光线更弱的场景下也可以辅助全色像素以获取到准确的相位信息,所以此时也可以使用彩色像素组及全色像素组的相位信息来实现相位对焦(PDAF),也可以仅使用全色像素组的相位信息来实现相位对焦(PDAF)。因此,将像素阵列中的彩色像素组及全色像素组或全色像素组作为目标像素组,且针对各目标像素组,采用第二尺寸输出模式将子像素的相位信息进行一定程度的合并,提高所输出的相位信息的准确性,提高相位信息的信噪比。最终,基于与像素阵列对应的第二尺寸相位阵列进行相位对焦,就可以提高对焦的准确性。
在一个实施例中,根据第二合并相位信息,生成与像素阵列对应的第二尺寸相位阵列,包括:
将相邻的三个第二合并相位信息再次进行合并,生成目标相位信息;其中,用于生成相邻的三个第二合并相位信息的子像素在目标像素中处于同一位置;
根据目标相位信息,生成像素阵列在第一方向的第二尺寸相位阵列。
结合图15所示,针对全色像素组,从第二合并相位信息中确定相邻的三个第二合并相位信息。具体的,判断用于生成第二合并相位信息的子像素在像素中是否处于同一位置;若是,则确定这三个第二合并相位信息为相邻的三个第二合并相位信息。将相邻的三个第二合并相位信息再次进行合并,生成目标相位信息。将目标相位信息输出,就生成了像素阵列在第一方向的第二尺寸相位阵列。
具体的,从第二合并相位信息L 1、第二合并相位信息L 2、第二合并相位信息L 3、第二合并相位信息R 1、第二合并相位信息R 2、第二合并相位信息R 3中,确定相邻的三个第二合并相位信息。其中,用于生成第二合并相位信息L 1的子像素为全色像素1、全色像素4及全色像素7的左半部分的子像素,用于生成第二合并相位信息L 2的子像素为全色像素2、全色像素5及全色像素8的左半部分的子像素,用于生成第二合并相位信息L 3的子像素为全色像素3、全色像素6及全色像素9的左半部分的子像素。且判断出这些子像素在各自像素中均处于同一位置(均处于左侧)。因此,确定第二合并相位信息L 1、第二合并相位信息L 2及第二合并相位信息L 3为相邻的三个第二合并相位信息。同理,确定第二合并相位信息R 1及第二合并相位信息R 2、第二合并相位信息R 3均为相邻的三个第二合并相位信息。
将相邻的三个第二合并相位信息再次进行合并,生成目标相位信息。即将第二合并相位信息L 1、第二合并相位信息L 2及第二合并相位信息L 3再次进行合并,生成目标相位信息。同理,将第二合并相位信息R 1、第二合并相位信息R 2及第二合并相位信息R 3再次进行合并,生成目标相位信息。
将所有的目标相位信息输出,就生成了像素阵列在第一方向的第二尺寸相位阵列。
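第二尺寸输出模式的两次合并过程，可以用如下Python草图示意。其中以一侧子像素的3×3相位信息为输入，先沿第二方向将同列三个子像素合并为第二合并相位信息，再将三个第二合并相位信息合并为一个目标相位信息；合并采用求和方式为假设：

```python
# 示意性草图：第二尺寸输出模式的两次合并。
def second_size_merge(vals):
    """vals: 3×3（行优先）的某一侧子像素相位信息，返回单个目标相位信息。"""
    # 第一次合并：沿第二方向将同列的三个子像素相位信息合并（第二合并相位信息）
    col_sums = [vals[0][c] + vals[1][c] + vals[2][c] for c in range(3)]
    # 第二次合并：将相邻的三个第二合并相位信息合并为目标相位信息
    return sum(col_sums)

L = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(second_size_merge(L))  # 45
```

左、右两侧子像素各得到一个目标相位信息，即构成文中所述"2×1个像素大小"的第二尺寸相位阵列。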
本申请实施例中,在当前拍摄场景的光线强度大于第三预设阈值,且小于或等于第二预设阈值的情况下,因为此时的光线强度更弱,则由于全色像素能够接收到更多的光线信息,因此,将像素阵列中的彩色像素组及全色像素组或全色像素组作为目标像素组,且针对各目标像素组,采用第二尺寸输出模式将子像素的相位信息进行两次合并,提高所输出的相位信息的准确性,提高相位信息的信噪比。最终,基于与像素阵列对应的第二尺寸相位阵列进行相位对焦,就可以提高对焦的准确性。
在一个实施例中,若目标像素组包括彩色像素组及全色像素组,则根据目标相位信息,生成像素阵列在第一方向的第二尺寸相位阵列,包括:
根据当前拍摄场景的光线强度确定彩色像素组对应的第三相位权重以及全色像素组对应的第四相位权重;其中,彩色像素组在不同的光线强度下所对应的第三相位权重不同,全色像素组在不同的光线强度下所对应的第四相位权重不同;
基于彩色像素组的目标相位信息及第三相位权重、全色像素组的目标相位信息及第四相位权重，生成像素阵列在第一方向的第二尺寸相位阵列。
具体的,在当前拍摄场景的光线强度大于第三预设阈值,且小于或等于第二预设阈值的场景下,且所确定的目标像素组包括彩色像素组及全色像素组,那么,基于目标像素组的相位信息生成第二尺寸相位阵列时,可以考虑不同像素组之间的权重。其中,可以根据当前拍摄场景的光线强度确定彩色像素组 对应的第三相位权重以及全色像素组对应的第四相位权重。具体的,当前拍摄场景的光线强度越接近第三预设阈值,则此时彩色像素组对应的第三相位权重越小,而全色像素组对应的第四相位权重越大。因为此时光线强度在大于第三预设阈值,且小于或等于第二预设阈值的场景下偏小,全色像素组对应的第四相位权重越大,则所获取到的相位信息越准确。随着光线强度增大,当前拍摄场景的光线强度越接近第二预设阈值,则此时彩色像素组对应的第三相位权重越大,而全色像素组对应的第四相位权重越小。因为此时光线强度在大于第三预设阈值,且小于或等于第二预设阈值的场景下偏大,彩色像素组对应的第三相位权重越大,则所获取到的相位信息越全面、越准确。其中,彩色像素组在不同的光线强度下所对应的第三相位权重不同,全色像素组在不同的光线强度下所对应的第四相位权重不同。例如,当前拍摄场景的光线强度为500lux时,确定彩色像素组对应的第三相位权重为40%,其中,绿色像素组的相位权重为20%,红色像素组的相位权重为10%,蓝色像素组的相位权重为10%。并确定全色像素组对应的第四相位权重为60%,本申请对此不做限定。
然后,就可以基于彩色像素组的目标相位信息及第三相位权重、全色像素组的目标相位信息及第四相位权重,生成像素阵列在第一方向的第二尺寸相位阵列。例如,针对该像素阵列,基于第一个红色像素组的目标相位信息及相位权重10%、第二个红色像素组的目标相位信息及相位权重10%,第一个蓝色像素组的目标相位信息及相位权重10%、第二个蓝色像素组的目标相位信息及相位权重10%,以及各个绿色像素组的目标相位信息及相位权重20%,还有各个全色像素组的目标相位信息及相位权重60%,共同求和计算出像素阵列在第一方向的相位信息,即得到了第二尺寸相位阵列。
本申请实施例中,在进行相位对焦时,若确定目标像素组包括彩色像素组及全色像素组,则可以基于彩色像素组的目标相位信息及其第三相位权重、全色像素组的目标相位信息及其第四相位权重,生成像素阵列的第二尺寸相位阵列。如此,基于彩色像素组及全色像素组的目标相位信息,共同生成像素阵列的第二尺寸相位阵列,可以提高相位信息的全面性。同时,在不同的光线强度下彩色像素组及全色像素组的目标相位信息的相位权重不同,如此,能够在不同的光线强度下通过调节权重大小来提高相位信息的准确性。
在一个实施例中,目标像素组中的每个目标像素对应的至少两个感光元件沿第一方向排列;若当前拍摄场景的光线强度小于或等于第三预设阈值,则按照相位信息输出模式,输出与像素阵列对应的相位阵列,包括:
将像素阵列中沿第二对角线方向相邻的两个全色像素组作为目标像素组;
本实施例为在当前拍摄场景的光线强度小于或等于第三预设阈值的情况下,按照第三尺寸输出模式,输出与像素阵列对应的相位阵列的具体实现操作。其中,此时处于光线较暗的夜晚或光线强度小于或等于50lux的环境中。结合图16所示,其中,首先从像素阵列中确定沿第二对角线方向相邻的两个全色像素组作为用于计算相位信息的目标像素组。因为在当前拍摄场景的光线强度小于或等于第三预设阈值的情况下,即在光线极暗的场景下,全色像素在光线极暗的场景下能够捕捉到更多的光线信息,所以此时可以使用全色像素组的相位信息来实现相位对焦(PDAF)。因此,在按照第三尺寸输出模式,输出与像素阵列对应的相位阵列时,可以选择全色像素组作为目标像素组。
具体的,针对目标像素组为全色像素组的情况,可以使用像素阵列中的部分全色像素组的相位信息来实现相位对焦,还可以是使用部分全色像素组中的部分全色像素来实现相位对焦,本申请对此不做限定。
由于此时可以只使用部分像素组的相位信息来进行相位对焦,或只使用了部分像素组中的部分像素的相位信息进行相位对焦,所以减小了所输出的相位信息的数据量,进而提高了相位对焦的效率。
针对目标像素组中的每个全色像素组，从全色像素组中沿第二方向获取多组相邻的三个子像素的相位信息；其中，第二方向与第一方向相互垂直；
将多组相邻的三个子像素的相位信息进行合并,生成多组第三合并相位信息;
其次,针对各目标像素组,获取目标像素组中每个目标像素的子像素的相位信息。其中,每个像素包括阵列排布的至少两个子像素,每个子像素对应一个感光元件。假设此时每个像素包括阵列排布的两个子像素,这两个子像素可以采用上下排布,也可以采用左右排布,本申请对此不做限定。在本申请实 施例中,选择采用左右排布的两个子像素来进行说明,那么,针对各目标像素组,沿第二方向获取多组相邻的三个子像素的相位信息。再将多组相邻的三个子像素的相位信息进行合并,生成第三合并相位信息。例如,目标像素组为像素阵列中沿第二对角线方向相邻的两个全色像素组,则沿第二方向获取多组相邻的三个子像素的相位信息,可以获取到12组相邻的三个子像素的相位信息。再将这12组相邻的三个子像素的相位信息分别进行合并,生成12组第三合并相位信息。
结合图16所示，一个像素阵列可以包括2个红色像素组、4个绿色像素组、2个蓝色像素组以及8个全色像素组。假设将该像素阵列中的所有全色像素组都作为目标像素组，则针对该像素阵列中所包括的8个全色像素组，依次计算各像素组的相位信息。例如，针对全色像素组计算该全色像素组的相位信息，全色像素组包括按照3×3阵列排布的9个全色像素，依次编号为全色像素1至全色像素9。其中，每个像素包括左右排布的两个子像素，每个子像素对应一个感光元件。即全色像素1至全色像素9各自包括左右排布的两个子像素，这些子像素的相位信息依次为L1、R1，L2、R2，L3、R3，L4、R4，L5、R5，L6、R6，L7、R7，L8、R8，L9、R9。
最后,沿第二方向获取多组相邻的三个子像素的相位信息。该12组相邻的三个子像素的相位信息分别为:第一个全色像素组中的L1、L4和L7,L2、L5和L8,L3、L6和L9;R1、R4和R7,R2、R5和R8,R3、R6和R9;第二个全色像素组中的L1、L4和L7,L2、L5和L8,L3、L6和L9;R1、R4和R7,R2、R5和R8,R3、R6和R9。
然后，将这12组相邻的三个子像素的相位信息分别进行合并，生成12组第三合并相位信息。例如，针对第一个全色像素组，将第一个全色像素组中的L1、L4和L7合并生成第三合并相位信息L 1，L2、L5和L8合并生成第三合并相位信息L 2，L3、L6和L9合并生成第三合并相位信息L 3；将R1、R4和R7合并生成第三合并相位信息R 1，R2、R5和R8合并生成第三合并相位信息R 2，R3、R6和R9合并生成第三合并相位信息R 3。同理，针对第二个全色像素组，将第二个全色像素组中的L1、L4和L7合并生成第三合并相位信息L 1，L2、L5和L8合并生成第三合并相位信息L 2，L3、L6和L9合并生成第三合并相位信息L 3；将R1、R4和R7合并生成第三合并相位信息R 1，R2、R5和R8合并生成第三合并相位信息R 2，R3、R6和R9合并生成第三合并相位信息R 3。
根据多组第三合并相位信息,生成与像素阵列对应的第三尺寸相位阵列,第三尺寸相位阵列的大小为阵列排布的2×1个像素的大小。
在将相邻的三个子像素的相位信息进行合并,生成第三合并相位信息之后,就可以根据第三合并相位信息,生成与像素阵列对应的第三尺寸相位阵列。具体的,若目标像素组为像素阵列中沿第二对角线方向相邻的两个全色像素组,则将相邻的三个子像素的相位信息进行合并,生成了12组第三合并相位信息。那么,可以是直接将该12组第三合并相位信息输出,作为与该全色像素阵列对应的第三尺寸相位阵列。即由与第一个全色像素组对应的L 1、R 1、L 2、R 2、L 3、R 3,与第二个全色像素组对应的L 1、R 1、L 2、R 2、L 3、R 3,依次排列生成了第三尺寸相位阵列。且该第三尺寸相位阵列的大小相当于阵列排布的6×2个像素的大小。
当然,还可以是对12组第三合并相位信息进行合并处理或变换处理,生成与像素阵列对应的第三尺寸相位阵列。这里的变换处理,可以是对12组第三合并相位信息进行校正等处理,本申请对此不做限定。在对12组第三合并相位信息进行合并处理时,将第一个全色像素组对应的L 1、L 2、L 3,与第二个全色像素组对应的L 1、L 2、L 3进行合并生成左侧的目标相位信息,将第一个全色像素组对应的R 1、R 2、R 3,与第二个全色像素组对应的R 1、R 2、R 3进行合并生成右侧的目标相位信息。即将相当于6×2个像素大小的相位阵列,合并为2×1个像素大小的相位阵列。当然,本申请并不对合并后的相位阵列 的具体大小做限定。这里,像素的大小指的是一个像素的面积大小,该面积大小与像素的长与宽相关。
其中，像素是数码相机感光器件(CCD或CMOS)上的最小感光单位。一般情况下，像素没有固定的大小，像素的大小与显示屏的尺寸以及分辨率相关。例如，显示屏的尺寸为4.5英寸，且显示屏的分辨率为1280×720，则显示屏的长为99.6mm，宽为56mm，则一个像素的长为99.6mm/1280=0.0778mm，宽也为56mm/720=0.0778mm。在这个例子中，阵列排布的2×1个像素的大小为：长为2×0.0778mm，宽为1×0.0778mm。当然，本申请对此不做限定。那么，该第三尺寸相位阵列的大小的长为2×0.0778mm，宽为1×0.0778mm。当然，在其他实施例中，像素也可以不是长宽相等的矩形，像素还可以是其他异形结构，本申请对此不做限定。
同理,针对像素阵列中的其他全色像素组,也是采用上述方法生成了各自的第三尺寸相位阵列。基于所有的第三尺寸相位阵列,就得到了该像素阵列的相位信息。
此时，可以将相位阵列输入ISP，通过ISP基于相位阵列计算像素阵列的相位差。然后，基于相位差计算出离焦距离，并计算出与该离焦距离对应的DAC code值。最后，通过马达(VCM)的driver IC将code值转换为驱动电流，并由马达驱动镜头移动到清晰位置。从而，根据相位差实现了对焦控制。
本申请实施例中,在当前拍摄场景的光线强度小于或等于第三预设阈值的情况下,因为此时的光线强度极暗,则通过彩色像素组所采集到的相位信息不是很准确,部分彩色像素组可能未采集到相位信息。因此,将像素阵列中沿第二对角线方向相邻的两个全色像素组作为目标像素组,且针对这两个全色像素组,采用第三尺寸输出模式将子像素的相位信息进行合并,提高所输出的相位信息的准确性,提高相位信息的信噪比。最终,基于与像素阵列对应的第三尺寸相位阵列进行相位对焦,就可以提高对焦的准确性。
在一个实施例中,根据第三合并相位信息,生成与像素阵列对应的第三尺寸相位阵列,包括:
将六个第三合并相位信息再次进行合并，生成目标相位信息；其中，用于生成六个第三合并相位信息的子像素在目标像素中处于同一子位置；
根据目标相位信息,生成像素阵列在第一方向的第三尺寸相位阵列。
在对12组第三合并相位信息进行合并处理时,将第一个全色像素组对应的L 1、L 2、L 3,与第二个全色像素组对应的L 1、L 2、L 3进行合并生成左侧的目标相位信息,将第一个全色像素组对应的R 1、R 2、R 3,与第二个全色像素组对应的R 1、R 2、R 3进行合并生成右侧的目标相位信息。即将相当于6×2个像素大小的相位阵列,合并为2×1个像素大小的相位阵列。当然,本申请并不对合并后的相位阵列的具体大小做限定。
将所有的目标相位信息输出,就生成了像素阵列在第一方向的第三尺寸相位阵列。
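第三尺寸输出模式对沿第二对角线方向相邻的两个全色像素组进行最大程度合并的过程，可以用如下Python草图示意（合并采用求和方式为假设，仅说明六个第三合并相位信息合并为一侧目标相位信息的流程）：

```python
# 示意性草图：第三尺寸输出模式。对两个相邻全色像素组的某一侧子像素相位信息，
# 先各自沿第二方向按列合并（第三合并相位信息），再将两组共六个合并为一个目标相位信息。
def third_size_merge(group1, group2):
    """group1/group2: 各为3×3（行优先）的某一侧子像素相位信息。"""
    def col_merge(vals):
        return [vals[0][c] + vals[1][c] + vals[2][c] for c in range(3)]
    merged = col_merge(group1) + col_merge(group2)  # 六个第三合并相位信息
    return sum(merged)  # 一侧的目标相位信息

g1 = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
g2 = [[2, 2, 2], [2, 2, 2], [2, 2, 2]]
print(third_size_merge(g1, g2))  # 27
```

左、右两侧各得到一个目标相位信息，即构成2×1个像素大小的第三尺寸相位阵列，在极暗光线下获得最高的信噪比。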
本申请实施例中,在当前拍摄场景的光线强度小于或等于第三预设阈值的情况下,因为此时的光线强度极暗,则通过彩色像素组所采集到的相位信息不是很准确,部分彩色像素组可能未采集到相位信息。因此,将像素阵列中沿第二对角线方向相邻的两个全色像素组作为目标像素组,且针对这两个全色像素组,采用第三尺寸输出模式将子像素的相位信息进行最大程度地合并,提高所输出的相位信息的准确性,提高相位信息的信噪比。最终,基于与像素阵列对应的第三尺寸相位阵列进行相位对焦,就可以提高对焦的准确性。
在一个实施例中,在按照相位信息输出模式,输出与像素阵列对应的相位阵列之前,还包括:
根据用于对焦控制的像素阵列的预设提取比例及预设提取位置,从图像传感器中的多个像素阵列中确定目标像素阵列;
按照相位信息输出模式,输出与像素阵列对应的相位阵列,包括:
按照相位信息输出模式,输出与目标像素阵列对应的相位阵列。
具体的，图像传感器的面积较大，所包含的作为最小单元的像素阵列数以万计，若从图像传感器中提取所有的相位信息进行相位对焦，则因为相位信息数据量太大，导致实际计算量过大，从而浪费系统资源、降低图像处理速度。
为了节约系统资源,并提高图像处理速度,可以预先按照预设提取比例及预设提取位置,从图像传感器中的多个像素阵列中提取用于对焦控制的像素阵列。例如,可以按照3%的预设提取比例进行提取, 即从32个像素阵列中提取一个像素阵列作为用于对焦控制的像素阵列。且以所提取的像素阵列作为六边形的顶点进行排布,即所提取的像素阵列构成了六边形。如此,能够均匀地获取到相位信息。当然,本申请并不对预设提取比例及预设提取位置进行限定。
然后,就可以根据当前拍摄场景的光线强度,确定与当前拍摄场景的光线强度适配的相位信息输出模式。并针对用于对焦控制的像素阵列,按照相位信息输出模式,输出与该像素阵列对应的相位阵列;其中,相位阵列包括像素阵列中目标像素对应的相位信息。最后,基于相位阵列计算像素阵列的相位差,并根据相位差进行对焦控制。
本申请实施例中,根据用于对焦控制的像素阵列的预设提取比例及预设提取位置,从图像传感器中的多个像素阵列中确定目标像素阵列。如此,不需要采用图像传感器中的所有相位信息进行对焦,而仅仅采用与目标像素阵列对应的相位信息进行对焦,大大减小了数据量,提高了图像处理的速度。同时,按照预设提取位置,从图像传感器中的多个像素阵列中确定目标像素阵列,能够更加均匀地获取到相位信息。最终,提高相位对焦的准确性。
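按预设提取比例抽取目标像素阵列的做法，可以用如下Python草图示意。其中"每32个像素阵列取1个（约3%）"取自文中示例，均匀间隔抽取方式为假设，文中所述的六边形排布方式此处未体现：

```python
# 示意性草图：按预设提取比例，从全部像素阵列中均匀抽取用于对焦控制的目标像素阵列索引。
def pick_focus_arrays(num_arrays, step=32):
    """num_arrays: 图像传感器中像素阵列总数；step=32 对应约3%的提取比例。"""
    return list(range(0, num_arrays, step))

picked = pick_focus_arrays(1024)
print(len(picked))      # 32
print(picked[:3])       # [0, 32, 64]
```

仅对抽取出的目标像素阵列输出相位信息，可显著减小数据量并提高处理速度。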
在一个实施例中,提供了一种对焦控制方法,还包括:
根据曝光参数及像素的尺寸确定光线强度的第一预设阈值、第二预设阈值及第三预设阈值。
具体的，在确定光线强度的阈值时，可以根据曝光参数及像素的尺寸来确定。其中，曝光参数包括快门速度、镜头光圈大小及感光度(ISO)。
本申请实施例中,根据曝光参数及像素的尺寸确定光线强度的第一预设阈值、第二预设阈值及第三预设阈值,将光线强度范围划分为4个范围,从而,在每个光线强度范围内采用与该光线强度范围对应的相位信息输出模式,从而,实现了更加精细化地计算相位信息。
在一个实施例中,如图17所示,提供了一种对焦控制装置1700,应用于图像传感器,该装置包括:
相位信息输出模式确定模块1720,用于根据当前拍摄场景的光线强度,确定与当前拍摄场景的光线强度适配的相位信息输出模式;其中,在不同的相位信息输出模式下,所输出的相位阵列的大小不同;
相位阵列输出模块1740,用于按照相位信息输出模式,输出与像素阵列对应的相位阵列;其中,相位阵列包括像素阵列中目标像素对应的相位信息;
对焦控制模块1760,用于基于相位阵列计算像素阵列的相位差,并根据相位差进行对焦控制。
在一个实施例中,相位信息输出模式确定模块1720,还用于确定当前拍摄场景的光线强度所属的目标光线强度范围;其中,不同的光线强度范围对应不同的相位信息输出模式;
根据目标光线强度范围,确定与当前拍摄场景的光线强度适配的相位信息输出模式。
在一个实施例中,相位信息输出模式包括全尺寸输出模式及第一尺寸输出模式,全尺寸输出模式下的相位阵列的大小大于第一尺寸输出模式下相位阵列的大小;
相位信息输出模式确定模块1720,还用于若当前拍摄场景的光线强度大于第一预设阈值,则确定与当前拍摄场景的光线强度适配的相位信息输出模式为全尺寸输出模式;若当前拍摄场景的光线强度大于第二预设阈值,且小于或等于第一预设阈值,则确定与当前拍摄场景的光线强度适配的相位信息输出模式为第一尺寸输出模式;第一预设阈值大于第二预设阈值。
在一个实施例中,如图18所示,若相位信息输出模式为全尺寸输出模式,则相位阵列输出模块1740,包括:
目标像素组确定单元1742,用于将像素阵列中的彩色像素组作为目标像素组;目标像素组中包括目标像素;
相位信息获取单元1744,用于针对各目标像素组,获取目标像素的子像素的相位信息;
全尺寸相位阵列生成单元1746,用于根据目标像素的子像素的相位信息,生成与像素阵列对应的全尺寸相位阵列;全尺寸相位阵列的大小为阵列排布的6×3个像素的大小。
在一个实施例中,目标像素组中的每个目标像素对应的至少两个感光元件沿第一方向排列;若相位信息输出模式为第一尺寸输出模式,则相位阵列输出模块1740,包括:
目标像素组确定单元,还用于将像素阵列中的彩色像素组、全色像素组中的至少一种作为目标像素组;
相位信息获取单元,还用于针对每个目标像素组,沿第二方向以一个子像素作为滑窗步长获取多组相邻的两个子像素的相位信息;其中,第二方向与第一方向相互垂直;
第一尺寸相位阵列生成单元,用于将多组相邻的两个子像素的相位信息进行合并,生成多组第一合并相位信息;根据多组第一合并相位信息,生成与像素阵列对应的第一尺寸相位阵列,第一尺寸相位阵列的大小为阵列排布的4×2个像素的大小。
在一个实施例中,第一尺寸相位阵列生成单元,还用于将相邻的两个第一合并相位信息再次进行合并,生成目标相位信息;其中,用于生成相邻的两个第一合并相位信息的子像素在目标像素中处于同一位置;根据目标相位信息,生成像素阵列在第一方向的第一尺寸相位阵列。
In one embodiment, if the target pixel groups include color pixel groups and panchromatic pixel groups, the first-size phase array generation unit is further configured to determine, according to the light intensity of the current shooting scene, a first phase weight for the color pixel groups and a second phase weight for the panchromatic pixel groups, the first phase weight of the color pixel groups and the second phase weight of the panchromatic pixel groups each differing under different light intensities, and to generate the first-size phase array of the pixel array in the first direction based on the target phase information and first phase weight of the color pixel groups and the target phase information and second phase weight of the panchromatic pixel groups.
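One plausible form of this light-dependent weighting is a linear ramp between two thresholds, so that dimmer scenes lean more on the (more sensitive) panchromatic groups. The ramp, the default thresholds and the function name are illustrative assumptions, not the patent's formula:

```python
def fuse_phase(color_phase, pan_phase, intensity, lo=500.0, hi=1000.0):
    """Blend color-group and panchromatic-group phase values with
    intensity-dependent weights (hypothetical linear ramp: the dimmer
    the scene, the more weight on the panchromatic pixels)."""
    # map intensity into a color weight clamped to [0, 1]
    w_color = max(0.0, min(1.0, (intensity - lo) / (hi - lo)))
    w_pan = 1.0 - w_color
    return w_color * color_phase + w_pan * pan_phase
```

At or below `lo` the result is purely panchromatic; at or above `hi` it is purely color; in between the two sources are mixed proportionally.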
In one embodiment, the phase information output modes further include a second-size output mode and a third-size output mode, the phase array in the first-size output mode being larger than the phase array in the second-size output mode, and the phase array in the second-size output mode being larger than the phase array in the third-size output mode;
the phase information output mode determination module 1720 is further configured to determine the second-size output mode as the adapted mode if the light intensity of the current shooting scene is greater than a third preset threshold and less than or equal to the second preset threshold, and to determine the third-size output mode as the adapted mode if the light intensity is less than or equal to the third preset threshold, the second preset threshold being greater than the third preset threshold.
In one embodiment, the at least two photosensitive elements corresponding to each target pixel in the target pixel groups are arranged along the first direction; if the phase information output mode is the second-size output mode, the phase array output module 1740 comprises:
the target pixel group determination unit, further configured to take the color pixel groups and panchromatic pixel groups in the pixel array, or the panchromatic pixel groups alone, as target pixel groups;
the phase information acquisition unit, further configured to acquire, for each target pixel group, the phase information of multiple groups of three adjacent sub-pixels along the second direction; and
a second-size phase array generation unit, configured to merge the phase information of the multiple groups of three adjacent sub-pixels to generate multiple groups of second merged phase information, and to generate, from the multiple groups of second merged phase information, a second-size phase array corresponding to the pixel array, the second-size phase array having the size of 2×1 pixels.
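The three-sub-pixel merge above can be sketched the same way as the pair merge, here with non-overlapping groups of three (averaging and the grouping scheme are illustrative assumptions):

```python
def merge_adjacent_triples(subpixel_phases):
    """Merge each group of three adjacent sub-pixels along the second
    direction into one second merged phase value (here: their mean)."""
    return [sum(subpixel_phases[i:i + 3]) / 3
            for i in range(0, len(subpixel_phases) - 2, 3)]
```

Binning three sub-pixels trades spatial resolution for a stronger signal, which is why this mode is reserved for lower light intensities than the pair-merging first-size mode.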
In one embodiment, the second-size phase array generation unit is further configured to merge three adjacent pieces of second merged phase information again to generate target phase information, the sub-pixels used to generate the three adjacent pieces of second merged phase information occupying the same position within their target pixels, and to generate, from the target phase information, the second-size phase array of the pixel array in the first direction.
In one embodiment, if the target pixel groups include color pixel groups and panchromatic pixel groups, the second-size phase array generation unit is further configured to determine, according to the light intensity of the current shooting scene, a third phase weight for the color pixel groups and a fourth phase weight for the panchromatic pixel groups, the third phase weight of the color pixel groups and the fourth phase weight of the panchromatic pixel groups each differing under different light intensities, and to generate the second-size phase array of the pixel array in the second direction based on the target phase information and third phase weight of the color pixel groups and the target phase information and fourth phase weight of the panchromatic pixel groups.
In one embodiment, the at least two photosensitive elements corresponding to each target pixel in the target pixel groups are arranged along the first direction; if the light intensity of the current shooting scene is less than or equal to the third preset threshold, the phase array output module 1740 comprises:
the target pixel group determination unit, further configured to take two panchromatic pixel groups adjacent along a second diagonal direction in the pixel array as a target pixel group;
the phase information acquisition unit, further configured to acquire, for each panchromatic pixel group in the target pixel group, the phase information of multiple groups of three adjacent sub-pixels along the second direction, the second direction being perpendicular to the first direction; and
a third-size phase array generation unit, configured to merge the phase information of the multiple groups of three adjacent sub-pixels to generate multiple groups of third merged phase information, and to generate, from the multiple groups of third merged phase information, a third-size phase array corresponding to the pixel array, the third-size phase array having the size of 2×1 pixels arranged in an array.
In one embodiment, the third-size phase array generation unit is further configured to merge six pieces of second merged phase information again to generate target phase information, the sub-pixels used to generate the six pieces of second merged phase information occupying the same sub-position within their target pixels, and to generate, from the target phase information, the third-size phase array of the pixel array in the first direction.
In one embodiment, a focus control apparatus is provided, further comprising:
a target pixel array determination module, configured to determine target pixel arrays from the multiple pixel arrays of the image sensor according to the preset extraction ratio and preset extraction positions of the pixel arrays used for focus control;
the phase array output module 1740, further configured to output, in the phase information output mode, phase arrays corresponding to the target pixel arrays.
In one embodiment, a focus control apparatus is provided, further comprising:
a threshold determination module, configured to determine the first, second and third preset thresholds of light intensity according to exposure parameters and the pixel size.
It should be understood that although the operations in the flowcharts above are shown sequentially in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering constraint on their execution, and they may be executed in other orders. Moreover, at least some of these operations may comprise multiple sub-operations or stages that are not necessarily completed at the same moment but may be executed at different times; nor must these sub-operations or stages be executed sequentially — they may be executed in turn or alternately with other operations, or with at least part of the sub-operations or stages of other operations.
The division of the focus control apparatus into the modules above is merely illustrative; in other embodiments, the apparatus may be divided into different modules as needed to accomplish all or part of its functions.
For specific limitations of the focus control apparatus, reference may be made to the limitations of the focus control method above, which are not repeated here. Each module of the focus control apparatus may be implemented wholly or partly in software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor of a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
FIG. 19 is a schematic diagram of the internal structure of an electronic device in one embodiment. The electronic device may be any terminal device such as a mobile phone, tablet computer, laptop, desktop computer, PDA (Personal Digital Assistant), POS (Point of Sale) terminal, in-vehicle computer or wearable device. The electronic device includes a processor and a memory connected via a system bus. The processor may include one or more processing units and may be a CPU (Central Processing Unit), DSP (Digital Signal Processor) or the like. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, which can be executed by the processor to implement the focus control method provided in the embodiments. The internal memory provides a cached runtime environment for the operating system and the computer program in the non-volatile storage medium.
Each module of the focus control apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on an electronic device, and the program modules it constitutes may be stored in the memory of the electronic device. When executed by a processor, the computer program implements the operations of the methods described in the embodiments of the present application.
Embodiments of the present application also provide a computer-readable storage medium: one or more non-volatile computer-readable storage media containing computer-executable instructions which, when executed by one or more processors, cause the processors to perform the operations of the focus control method.
Embodiments of the present application also provide a computer program product containing instructions which, when run on a computer, cause the computer to perform the focus control method.
Any reference to memory, storage, a database or other media used in the present application may include non-volatile and/or volatile memory. Non-volatile memory may include ROM (Read-Only Memory), PROM (Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory) or flash memory. Volatile memory may include RAM (Random Access Memory), which serves as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as SRAM (Static RAM), DRAM (Dynamic RAM), SDRAM (Synchronous Dynamic RAM), DDR SDRAM (Double Data Rate Synchronous Dynamic RAM), ESDRAM (Enhanced Synchronous Dynamic RAM), SLDRAM (Sync Link Dynamic RAM), RDRAM (Rambus Dynamic RAM) and DRDRAM (Direct Rambus Dynamic RAM).
The embodiments above express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the patent scope of the present application. It should be noted that those of ordinary skill in the art may make several variations and improvements without departing from the concept of the present application, all of which fall within its scope of protection. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (26)

  1. An image sensor, characterized in that the image sensor comprises a microlens array, a pixel array and a filter array, the filter array comprising a minimal repeating unit, the minimal repeating unit comprising a plurality of filter groups, each filter group comprising color filters and panchromatic filters; the color filters have a narrower spectral response than the panchromatic filters, and each color filter and each panchromatic filter comprises 9 sub-filters arranged in an array;
    wherein the pixel array comprises a plurality of panchromatic pixel groups and a plurality of color pixel groups, each panchromatic pixel group corresponding to a panchromatic filter and each color pixel group corresponding to a color filter; each panchromatic pixel group and each color pixel group comprises 9 pixels; the microlenses of the microlens array, the pixels of the pixel array and the sub-filters of the filter array are arranged in correspondence, and each pixel comprises at least two sub-pixels arranged in an array, each sub-pixel corresponding to one photosensitive element.
  2. The image sensor according to claim 1, characterized in that there are 4 filter groups, the 4 filter groups being arranged in a matrix.
  3. The image sensor according to claim 2, characterized in that, in each filter group, the panchromatic filters are arranged along a first diagonal direction and the color filters along a second diagonal direction, the first diagonal direction being different from the second diagonal direction.
  4. The image sensor according to claim 3, characterized in that each filter group contains 2 panchromatic filters and 2 color filters, and the minimal repeating unit is 144 sub-filters in 12 rows and 12 columns, arranged as:
    Figure PCTCN2022120545-appb-100001
    wherein w denotes a panchromatic sub-filter, and a, b and c each denote a color sub-filter.
  5. The image sensor according to claim 3, characterized in that each filter group contains 2 panchromatic filters and 2 color filters, and the minimal repeating unit is 144 sub-filters in 12 rows and 12 columns, arranged as:
    Figure PCTCN2022120545-appb-100002
    wherein w denotes a panchromatic sub-filter, and a, b and c each denote a color sub-filter.
  6. The image sensor according to claim 1, characterized in that the at least two photosensitive elements corresponding to a pixel are arranged in a centrally symmetric manner.
  7. The image sensor according to claim 1, characterized in that the shape of the at least two photosensitive elements is any one of rectangular, trapezoidal, triangular and L-shaped.
  8. A focus control method, characterized by being applied to the image sensor according to any one of claims 1 to 7, the method comprising:
    determining, according to the light intensity of a current shooting scene, a phase information output mode adapted to the light intensity of the current shooting scene, wherein the phase arrays output in different phase information output modes differ in size;
    outputting, in the phase information output mode, a phase array corresponding to the pixel array, wherein the phase array comprises phase information corresponding to target pixels in the pixel array; and
    computing a phase difference of the pixel array based on the phase array, and performing focus control according to the phase difference.
  9. The method according to claim 8, characterized in that determining, according to the light intensity of the current shooting scene, the phase information output mode adapted to the light intensity of the current shooting scene comprises:
    determining a target light intensity range to which the light intensity of the current shooting scene belongs, wherein different light intensity ranges correspond to different phase information output modes; and
    determining, according to the target light intensity range, the phase information output mode adapted to the light intensity of the current shooting scene.
  10. The method according to claim 9, characterized in that the phase information output modes comprise a full-size output mode and a first-size output mode, the phase array in the full-size output mode being larger than the phase array in the first-size output mode;
    determining, according to the target light intensity range, the phase information output mode adapted to the light intensity of the current shooting scene comprises:
    if the light intensity of the current shooting scene is greater than a first preset threshold, determining the full-size output mode as the phase information output mode adapted to the light intensity of the current shooting scene; and
    if the light intensity of the current shooting scene is greater than a second preset threshold and less than or equal to the first preset threshold, determining the first-size output mode as the phase information output mode adapted to the light intensity of the current shooting scene, the first preset threshold being greater than the second preset threshold.
  11. The method according to claim 10, characterized in that, if the phase information output mode is the full-size output mode, outputting, in the phase information output mode, the phase array corresponding to the pixel array comprises:
    taking the color pixel groups in the pixel array as target pixel groups, the target pixel groups containing target pixels;
    acquiring, for each target pixel group, phase information of the sub-pixels of the target pixels; and
    generating, from the phase information of the sub-pixels of the target pixels, a full-size phase array corresponding to the pixel array, the full-size phase array having the size of 6×3 pixels arranged in an array.
  12. The method according to claim 10, characterized in that the at least two photosensitive elements corresponding to each target pixel in the target pixel groups are arranged along a first direction; if the phase information output mode is the first-size output mode, outputting, in the phase information output mode, the phase array corresponding to the pixel array comprises:
    taking at least one of the color pixel groups and panchromatic pixel groups in the pixel array as target pixel groups;
    acquiring, for each target pixel group, phase information of multiple groups of two adjacent sub-pixels along a second direction with a sliding-window step of one sub-pixel, wherein the second direction is perpendicular to the first direction;
    merging the phase information of the multiple groups of two adjacent sub-pixels to generate multiple groups of first merged phase information; and
    generating, from the multiple groups of first merged phase information, a first-size phase array corresponding to the pixel array, the first-size phase array having the size of 4×2 pixels arranged in an array.
  13. The method according to claim 12, characterized in that generating, from the multiple groups of first merged phase information, the first-size phase array corresponding to the pixel array comprises:
    merging multiple groups of two adjacent pieces of first merged phase information again to generate target phase information, wherein the sub-pixels used to generate the two adjacent pieces of first merged phase information occupy the same position within their target pixels; and
    generating, from the target phase information, the first-size phase array of the pixel array.
  14. The method according to claim 13, characterized in that, if the target pixel groups comprise color pixel groups and panchromatic pixel groups, generating, from the target phase information, the first-size phase array of the pixel array comprises:
    determining, according to the light intensity of the current shooting scene, a first phase weight corresponding to the color pixel groups and a second phase weight corresponding to the panchromatic pixel groups, wherein the first phase weight of the color pixel groups and the second phase weight of the panchromatic pixel groups each differ under different light intensities; and
    generating the first-size phase array of the pixel array based on the target phase information and the first phase weight of the color pixel groups, and the target phase information and the second phase weight of the panchromatic pixel groups.
  15. The method according to claim 10, characterized in that the phase information output modes further comprise a second-size output mode and a third-size output mode, wherein the phase array in the first-size output mode is larger than the phase array in the second-size output mode, and the phase array in the second-size output mode is larger than the phase array in the third-size output mode;
    determining, according to the target light intensity range, the phase information output mode adapted to the light intensity of the current shooting scene comprises:
    if the light intensity of the current shooting scene is greater than a third preset threshold and less than or equal to the second preset threshold, determining the second-size output mode as the phase information output mode adapted to the light intensity of the current shooting scene; and
    if the light intensity of the current shooting scene is less than or equal to the third preset threshold, determining the third-size output mode as the phase information output mode adapted to the light intensity of the current shooting scene, the second preset threshold being greater than the third preset threshold.
  16. The method according to claim 15, characterized in that the at least two photosensitive elements corresponding to each target pixel in the target pixel groups are arranged along a first direction; if the phase information output mode is the second-size output mode, outputting, in the phase information output mode, the phase array corresponding to the pixel array comprises:
    taking the color pixel groups and panchromatic pixel groups in the pixel array, or the panchromatic pixel groups, as target pixel groups;
    acquiring, for each target pixel group, phase information of multiple groups of three adjacent sub-pixels along a second direction;
    merging the phase information of the multiple groups of three adjacent sub-pixels to generate multiple groups of second merged phase information; and
    generating, from the multiple groups of second merged phase information, a second-size phase array corresponding to the pixel array, the second-size phase array having the size of 2×1 pixels.
  17. The method according to claim 16, characterized in that generating, from the multiple groups of second merged phase information, the second-size phase array corresponding to the pixel array comprises:
    merging three adjacent pieces of the second merged phase information again to generate target phase information, wherein the sub-pixels used to generate the three adjacent pieces of second merged phase information occupy the same position within their target pixels; and
    generating, from the target phase information, the second-size phase array of the pixel array.
  18. The method according to claim 17, characterized in that, if the target pixel groups comprise color pixel groups and panchromatic pixel groups, generating, from the target phase information, the second-size phase array of the pixel array comprises:
    determining, according to the light intensity of the current shooting scene, a third phase weight corresponding to the color pixel groups and a fourth phase weight corresponding to the panchromatic pixel groups, wherein the third phase weight of the color pixel groups and the fourth phase weight of the panchromatic pixel groups each differ under different light intensities; and
    generating the second-size phase array of the pixel array based on the target phase information and the third phase weight of the color pixel groups, and the target phase information and the fourth phase weight of the panchromatic pixel groups.
  19. The method according to claim 15, characterized in that the at least two photosensitive elements corresponding to each target pixel in the target pixel groups are arranged along a first direction; if the light intensity of the current shooting scene is less than or equal to the third preset threshold, outputting, in the phase information output mode, the phase array corresponding to the pixel array comprises:
    taking two panchromatic pixel groups adjacent along a second diagonal direction in the pixel array as a target pixel group;
    acquiring, for each panchromatic pixel group in the target pixel group, phase information of multiple groups of three adjacent sub-pixels along the second direction, wherein the second direction is perpendicular to the first direction;
    merging the phase information of the multiple groups of three adjacent sub-pixels to generate multiple groups of third merged phase information; and
    generating, from the multiple groups of third merged phase information, a third-size phase array corresponding to the pixel array, the third-size phase array having the size of 2×1 pixels arranged in an array.
  20. The method according to claim 19, characterized in that generating, from the third merged phase information, the third-size phase array corresponding to the pixel array comprises:
    merging six pieces of second merged phase information again to generate target phase information, wherein the sub-pixels used to generate the six pieces of second merged phase information occupy the same sub-position within their target pixels; and
    generating, from the target phase information, the third-size phase array of the pixel array.
  21. The method according to claim 8, characterized in that, before outputting, in the phase information output mode, the phase array corresponding to the pixel array, the method further comprises:
    determining target pixel arrays from the multiple pixel arrays of the image sensor according to a preset extraction ratio and preset extraction positions of the pixel arrays used for focus control;
    and outputting, in the phase information output mode, the phase array corresponding to the pixel array comprises:
    outputting, in the phase information output mode, phase arrays corresponding to the target pixel arrays.
  22. The method according to claim 15, characterized in that the method further comprises:
    determining the first preset threshold, the second preset threshold and the third preset threshold of the light intensity according to exposure parameters and the pixel size.
  23. A focus control apparatus, characterized by being applied to the image sensor according to any one of claims 1 to 7, the apparatus comprising:
    a phase information output mode determination module, configured to determine, according to the light intensity of a current shooting scene, a phase information output mode adapted to the light intensity of the current shooting scene, wherein the phase arrays output in different phase information output modes differ in size;
    a phase array output module, configured to output, in the phase information output mode, a phase array corresponding to the pixel array, wherein the phase array comprises phase information corresponding to target pixels in the pixel array; and
    a focus control module, configured to compute a phase difference of the pixel array based on the phase array and to perform focus control according to the phase difference.
  24. An electronic device, comprising a memory and a processor, the memory storing a computer program, characterized in that the computer program, when executed by the processor, causes the processor to perform the operations of the focus control method according to any one of claims 8 to 22.
  25. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the operations of the focus control method according to any one of claims 8 to 22.
  26. A computer program product comprising a computer program/instructions, characterized in that the computer program/instructions, when executed by a processor, implement the operations of the focus control method according to any one of claims 8 to 22.
PCT/CN2022/120545 2021-11-22 2022-09-22 Focus control method and apparatus, image sensor, electronic device and computer-readable storage medium WO2023087908A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111383861.9A CN113891006A (zh) 2021-11-22 2021-11-22 Focus control method and apparatus, image sensor, electronic device and computer-readable storage medium
CN202111383861.9 2021-11-22

Publications (1)

Publication Number Publication Date
WO2023087908A1 true WO2023087908A1 (zh) 2023-05-25


Country Status (2)

Country Link
CN (1) CN113891006A (zh)
WO (1) WO2023087908A1 (zh)


Also Published As

Publication number Publication date
CN113891006A (zh) 2022-01-04


Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE