WO2023124611A1 - Focus control method, device, image sensor, electronic device and computer-readable storage medium - Google Patents

Focus control method, device, image sensor, electronic device and computer-readable storage medium

Info

Publication number
WO2023124611A1
WO2023124611A1 (PCT/CN2022/132425)
Authority
WO
WIPO (PCT)
Prior art keywords
target
pixel
phase information
array
phase
Prior art date
Application number
PCT/CN2022/132425
Other languages
English (en)
French (fr)
Inventor
Yang Xin (杨鑫)
Original Assignee
Guangdong OPPO Mobile Telecommunications Corp., Ltd. (Oppo广东移动通信有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Publication of WO2023124611A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50: Constructional details
    • H04N23/55: Optical parts specially adapted for electronic image sensors; Mounting thereof
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/667: Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/67: Focus control based on electronic image sensor signals
    • H04N23/672: Focus control based on electronic image sensor signals based on the phase difference signals

Definitions

  • the present application relates to the technical field of image processing, and in particular to a focus control method, device, image sensor, electronic equipment, and computer-readable storage medium.
  • phase detection autofocus (English: phase detection auto focus; abbreviated: PDAF).
  • traditional phase detection autofocus mainly calculates a phase difference based on an RGB pixel array and then controls a motor based on that phase difference; the motor drives the lens to a suitable position for focusing, so that the subject is imaged on the focal plane.
  • Embodiments of the present application provide a focus control method, device, electronic device, image sensor, and computer-readable storage medium, which can improve focus accuracy.
  • an image sensor includes a microlens array, a pixel array, and a filter array
  • the filter array includes a minimum repeating unit, and the minimum repeating unit includes a plurality of filter groups
  • the filter set includes a color filter and a panchromatic filter
  • the color filter has a narrower spectral response than the panchromatic filter
  • each of the color filter and the panchromatic filter includes 9 sub-filters arranged in an array
  • the pixel array includes a plurality of pixel groups, each being a panchromatic pixel group or a color pixel group; each panchromatic pixel group corresponds to a panchromatic filter, and each color pixel group corresponds to a color filter; both the panchromatic pixel group and the color pixel group include 9 pixels; the pixels of the pixel array are arranged corresponding to the sub-filters of the filter array, and each pixel corresponds to a photosensitive element;
  • the microlens array includes a plurality of microlens groups, each microlens group corresponding to a panchromatic pixel group or a color pixel group; the microlens group includes a plurality of microlenses, and at least one of the plurality of microlenses corresponds to at least two pixels.
  • a focus control method which is applied to the image sensor as described above, and the method includes:
  • determining, according to the light intensity of the current shooting scene, a phase information output mode adapted to that light intensity; wherein, in different phase information output modes, the sizes of the output phase arrays are different;
  • outputting a phase array corresponding to the pixel array according to the phase information output mode; wherein the phase array includes phase information corresponding to a target pixel in the pixel array;
  • calculating the phase difference of the pixel array based on the phase array, and performing focus control according to the phase difference.
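As a rough illustration of the three steps above (a sketch only, not the patent's method), the mode selection and the resulting phase-array sizes can be written in Python; the thresholds, mode names, and scale factors here are invented for illustration:

```python
def select_output_mode(light_intensity, low=50, high=200):
    """Map a measured scene light intensity to a phase information output
    mode.  The thresholds are illustrative assumptions."""
    if light_intensity < low:
        return "binned"    # merge pixels: smaller phase array, higher SNR
    if light_intensity < high:
        return "partial"   # intermediate-size phase array
    return "full"          # per-pixel phase array, highest resolution


def phase_array_size(mode, rows, cols):
    """Size of the output phase array for a rows x cols pixel array,
    assuming each mode merges square blocks of pixels."""
    scale = {"full": 1, "partial": 2, "binned": 3}[mode]
    return rows // scale, cols // scale
```

For a 12 x 12 pixel array, the binned mode would then output a 4 x 4 phase array, the full mode a 12 x 12 one.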
  • a focus control device which is applied to the image sensor as described above, and the device includes:
  • a phase information output mode determination module configured to determine, according to the light intensity of the current shooting scene, a phase information output mode adapted to that light intensity; wherein, in different phase information output modes, the output phase arrays are of different sizes;
  • a phase array output module configured to output a phase array corresponding to the pixel array according to the phase information output mode; wherein, the phase array includes phase information corresponding to a target pixel in the pixel array;
  • a focus control module configured to calculate the phase difference of the pixel array based on the phase array, and perform focus control according to the phase difference.
  • an electronic device including a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the processor performs the operations of the focus control method described above.
  • a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the operations of the method as described above are implemented.
  • a computer program product including computer programs/instructions, characterized in that, when the computer program/instructions are executed by a processor, operations of the focusing control method as described above are implemented.
  • Fig. 1 is a schematic structural diagram of an electronic device in an embodiment
  • FIG. 2 is a schematic diagram of the principle of phase detection autofocus
  • FIG. 3 is a schematic diagram of phase detection pixels arranged in pairs among the pixels included in the image sensor
  • Fig. 4 is an exploded schematic diagram of an image sensor in an embodiment
  • Fig. 5 is a schematic diagram of connection between a pixel array and a readout circuit in an embodiment
  • Fig. 6 is a schematic diagram of the arrangement of the smallest repeating unit of the pixel array in an embodiment
  • Fig. 7 is a schematic diagram of the arrangement of the smallest repeating unit of the pixel array in another embodiment
  • Figure 8 is a schematic diagram of the arrangement of the smallest repeating unit of the microlens array in one embodiment
  • FIG. 9 is a schematic diagram of the arrangement of the smallest repeating unit of the microlens array in another embodiment.
  • FIG. 10 is a flowchart of a focus control method in an embodiment
  • FIG. 11 is a flowchart of a method for outputting a phase array corresponding to a pixel array according to a phase information output mode in an embodiment
  • FIG. 12 is a flowchart of a method for determining a phase information output mode adapted to the light intensity of the current shooting scene according to the light intensity of the current shooting scene in FIG. 11 ;
  • FIG. 13 is a flowchart of a method for determining a phase information output mode adapted to the light intensity of the current shooting scene according to the target light intensity range in FIG. 12;
  • Figure 14 is a schematic diagram of generating a full-scale phase array in one embodiment
  • Figure 15 is a flow chart of a method for generating a phased array of a first size in one embodiment
  • Fig. 16 is a schematic diagram of generating a second-size phase array in one embodiment
  • Fig. 17 is a structural block diagram of a focus control device in an embodiment
  • Fig. 18 is a structural block diagram of the phase array output module in Fig. 17;
  • Fig. 19 is a schematic diagram of the internal structure of an electronic device in one embodiment.
  • the terms first, second, third, and the like used in this application may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another element.
  • a first dimension could be termed a second dimension, and, similarly, a second dimension could be termed a first dimension, without departing from the scope of the present application.
  • Both the first size and the second size are sizes, but they are not the same size.
  • the first preset threshold may be referred to as a second preset threshold, and similarly, the second preset threshold may be referred to as a first preset threshold. Both the first preset threshold and the second preset threshold are preset thresholds, but they are not the same preset threshold.
  • FIG. 1 is a schematic diagram of an application environment of a focus control method in an embodiment.
  • the application environment includes an electronic device 100 .
  • the electronic device 100 includes an image sensor, and the image sensor includes a pixel array. The electronic device determines, according to the light intensity of the current shooting scene, a phase information output mode adapted to that light intensity; in different phase information output modes, the sizes of the output phase arrays are different. According to the phase information output mode, a phase array corresponding to the pixel array is output, where the phase array includes the phase information corresponding to the pixel array. The phase difference of the pixel array is calculated based on the phase array, and focus control is performed according to the phase difference.
  • electronic devices can be mobile phones, tablet computers, PDAs (Personal Digital Assistants), wearable devices (smart bracelets, smart watches, smart glasses, smart gloves, smart socks, smart belts, etc.), VR (virtual reality) devices, smart home devices, driverless cars, and other terminal devices with image processing functions.
  • the electronic device 100 includes a camera 20 , a processor 30 and a casing 40 .
  • Both the camera 20 and the processor 30 are arranged in the casing 40. The casing 40 can also be used to install functional modules such as a power supply device and a communication device of the terminal 100, so that the casing 40 provides dustproof, drop-proof, and waterproof protection for the functional modules.
  • the camera 20 may be a front camera, a rear camera, a side camera, an under-screen camera, etc., which is not limited here.
  • the camera 20 includes a lens and an image sensor 21. When the camera 20 captures an image, light passes through the lens and reaches the image sensor 21.
  • the image sensor 21 is used to convert the light signal irradiated on the image sensor 21 into an electrical signal.
  • Fig. 2 is a schematic diagram of the principle of phase detection auto focus (PDAF).
  • M1 is the position of the image sensor when the imaging device is in the in-focus state, where the in-focus state refers to a state of successful focusing. If the image sensor is located at position M1, the imaging light rays g reflected from the object W to the lens Lens in different directions converge on the image sensor; that is, they are imaged at the same position on the image sensor, and the image produced by the image sensor is clear.
  • M2 and M3 are the possible positions of the image sensor when the imaging device is not in focus.
  • if the image sensor is located at position M2 or position M3, the imaging light rays g reflected from the object W to the lens Lens in different directions are imaged at different positions.
  • if the image sensor is at position M2, the imaging light rays g reflected from the object W to the lens Lens in different directions are imaged at position A and position B respectively; if the image sensor is at position M3, they are imaged at position C and position D respectively, and in these cases the image produced by the image sensor is not clear.
  • the difference in position of the images formed on the image sensor by imaging rays entering the lens from different directions can be obtained; for example, as shown in Figure 2, the difference between position A and position B, or the difference between position C and position D, can be obtained. From this difference and the geometric relationship between the lens and the image sensor in the camera, the defocus distance is obtained.
  • the so-called defocus distance refers to the distance between the current position of the image sensor and the position where the image sensor should be in the in-focus state; the imaging device can focus according to the obtained defocus distance.
  • when the camera is in focus, the calculated PD (phase difference) value is 0.
  • the larger the calculated value, the farther the image sensor is from the in-focus position; the smaller the value, the closer it is.
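The relation between the PD value and the defocus distance is often approximated as linear with a per-module calibration slope; this is a sketch under that assumption (the function name, the linear model, and the calibration constant are not from the patent):

```python
def defocus_from_pd(pd_value, slope):
    """Approximate the defocus distance from a phase difference (PD) value.

    'slope' is a per-module calibration constant (assumed here).  PD == 0
    means the sensor is already at the in-focus position; the sign of the
    result indicates the direction the lens must be moved.
    """
    return slope * pd_value
```

A PD of 0 yields a defocus distance of 0, and a larger absolute PD yields a larger absolute defocus distance, matching the description above.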
  • phase detection pixel points can be set in pairs among the pixel points included in the image sensor.
  • the image sensor can be provided with phase detection pixel pairs (hereinafter referred to as pixel pairs), for example pixel pair A, pixel pair B, and pixel pair C.
  • in each pixel pair, one phase detection pixel is shielded on the left side (Left Shield) and the other phase detection pixel is shielded on the right side (Right Shield).
  • the imaging beam can be divided into left and right parts, and the phase difference can be obtained by comparing the images formed by the left and right parts of the imaging beam.
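The comparison of the images formed by the left and right parts of the imaging beam can be sketched as a one-dimensional shift search. This minimal example (an illustrative assumption, not the patent's algorithm) estimates the phase difference by minimising the mean absolute difference over candidate shifts:

```python
def phase_difference(left, right, max_shift=4):
    """Estimate the shift (in pixels) between left- and right-view signals
    by minimising the mean sum of absolute differences."""
    best_shift, best_cost = 0, float("inf")
    n = len(left)
    for s in range(-max_shift, max_shift + 1):
        cost, count = 0.0, 0
        for i in range(n):
            j = i + s
            if 0 <= j < n:  # only compare overlapping samples
                cost += abs(left[i] - right[j])
                count += 1
        if count == 0:
            continue
        cost /= count
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift
```

If the right-view signal is the left-view signal displaced by two pixels, the estimated phase difference is 2; identical signals give 0, i.e. the in-focus case.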
  • the image sensor includes a pixel array and a filter array
  • the filter array includes a minimum repeating unit
  • the minimum repeating unit includes a plurality of filter groups
  • the filter group includes color filters and panchromatic filters
  • within the filter group, the color filters are arranged in a first diagonal direction and the panchromatic filters are arranged in a second diagonal direction; the first diagonal direction is different from the second diagonal direction
  • the color filter has a narrower spectral response than the panchromatic filter
  • both the color filter and the panchromatic filter include 9 sub-filters arranged in an array
  • the pixel array includes a plurality of panchromatic pixel groups and a plurality of color pixel groups, each panchromatic pixel group corresponds to a panchromatic filter, and each color pixel group corresponds to a color filter; the panchromatic pixel group and the color pixel group Each of them includes 9 pixels, the pixels of the pixel array are arranged corresponding to the sub-filters of the filter array, and each pixel corresponds to a photosensitive element.
  • the microlens array includes a plurality of microlens groups, each microlens group corresponds to a panchromatic pixel group or a color pixel group, the microlens group includes a plurality of microlenses, and at least one microlens in the plurality of microlenses corresponds to at least two pixels.
  • the phase difference of the pixel array can be calculated based on the phase information of the at least two pixels.
  • the image sensor 21 includes a microlens array 22 , a filter array 23 , and a pixel array 24 .
  • the microlens array 22 includes a plurality of minimal repeating units 221 , the smallest repeating unit 221 includes a plurality of microlens groups 222 , and the microlens group 222 includes a plurality of microlenses 2221 .
  • the sub-filters in the filter array 23 correspond to the pixels in the pixel array 24 one by one, and at least one microlens in the plurality of microlenses 2221 corresponds to at least two pixels.
  • the microlens 2221 is used to gather the incident light, and the gathered light will pass through the corresponding sub-filter, and then be projected onto the pixel, and be received by the corresponding pixel, and the pixel will convert the received light into an electrical signal.
  • the filter array 23 includes a plurality of minimal repeating units 231 .
  • the minimum repeating unit 231 may include a plurality of filter sets 232 .
  • Each filter set 232 includes a panchromatic filter 233 and a color filter 234 having a narrower spectral response than the panchromatic filter 233 .
  • Each panchromatic filter 233 includes 9 sub-filters 2331
  • each color filter 234 includes 9 sub-filters 2341 .
  • different filter sets 232 may include color filters 234 of different colors.
  • the colors corresponding to the wavelength bands of the transmitted light of the color filters 234 of the filter sets 232 in the minimum repeating unit 231 include color a, color b and/or color c.
  • the color corresponding to the wavelength band of the transmitted light of the color filter 234 of the filter group 232 includes color a, color b and color c, or color a, color b or color c, or color a and color b, or color b and color c, or color a and color c.
  • the color a is red
  • the color b is green
  • the color c is blue, or for example, the color a is magenta, the color b is cyan, and the color c is yellow, etc., which are not limited here.
  • the width of the wavelength band of the light transmitted by the color filter 234 is smaller than the width of the wavelength band of the light transmitted by the panchromatic filter 233; for example, the wavelength band of the light transmitted by the color filter 234 can correspond to the wavelength band of red light, green light, or blue light.
  • the wavelength band of the light transmitted by the panchromatic filter 233 is the wavelength band of all visible light; that is to say, the color filter 234 only passes light of a specific color, while the panchromatic filter 233 passes light of all colors.
  • the wavelength band of the light transmitted by the color filter 234 may also correspond to the wavelength band of other colored light, such as magenta light, purple light, cyan light, yellow light, etc., which is not limited here.
  • the ratio of the number of color filters 234 to the number of panchromatic filters 233 in the filter set 232 may be 1:3, 1:1 or 3:1. For example, if the ratio of the number of color filters 234 to the number of panchromatic filters 233 is 1:3, then the number of color filters 234 is 1, and the number of panchromatic filters 233 is 3.
  • if the ratio is 1:3, the number of panchromatic filters 233 is larger; compared with the traditional arrangement of only color filters, more phase information can be obtained through the panchromatic filters 233 in dim light, so the focusing quality is better. If the ratio of the number of color filters 234 to the number of panchromatic filters 233 is 1:1, then there are 2 color filters 234 and 2 panchromatic filters 233; in this case, better color performance can be obtained while more phase information is still obtained through the panchromatic filters 233 in dim light, so the focusing quality is also good. If the ratio is 3:1, then there are 3 color filters 234 and 1 panchromatic filter 233; at this time, better color performance can be obtained, and the focusing quality in dim light is likewise improved.
  • the pixel array 24 includes a plurality of pixels, and the pixels of the pixel array 24 are arranged corresponding to the sub-filters of the filter array 23 .
  • the pixel array 24 is configured to receive light passing through the filter array 23 to generate electrical signals.
  • the pixel array 24 being configured to receive the light passing through the filter array 23 to generate an electrical signal means that the pixel array 24 photoelectrically converts the light of a scene of a given set of subjects that has passed through the filter array 23, thereby generating an electrical signal.
  • the light rays of the scene for a given set of subjects are used to generate image data.
  • the subject is a building
  • the scene of a given set of subjects refers to the scene where the building is located, which may also contain other objects.
  • the pixel array 24 can be an RGBW pixel array, including a plurality of minimum repeating units 241, the minimum repeating unit 241 includes a plurality of pixel groups 242, and the plurality of pixel groups 242 includes a panchromatic pixel group 243 and a color pixel group 244 .
  • Each panchromatic pixel group 243 includes 9 panchromatic pixels 2431
  • each color pixel group 244 includes 9 color pixels 2441 .
  • Each panchromatic pixel 2431 corresponds to a sub-filter 2331 in the panchromatic filter 233, and the panchromatic pixel 2431 receives light passing through the corresponding sub-filter 2331 to generate an electrical signal.
  • Each color pixel 2441 corresponds to a sub-filter 2341 of the color filter 234, and the color pixel 2441 receives light passing through the corresponding sub-filter 2341 to generate an electrical signal.
  • each pixel corresponds to a photosensitive element. That is, each panchromatic pixel 2431 corresponds to a photosensitive element; each color pixel 2441 corresponds to a photosensitive element.
  • the image sensor 21 in this embodiment includes a filter array 23 and a pixel array 24. The filter array 23 includes a minimum repeating unit 231, the minimum repeating unit 231 includes a plurality of filter groups 232, and each filter group includes a panchromatic filter 233 and a color filter 234, the color filter 234 having a narrower spectral response than the panchromatic filter 233. More light can therefore be obtained when shooting, so the focusing quality in low light is improved without adjusting the shooting parameters.
  • both stability and quality can thus be taken into account: the stability and the quality of focusing in low light are both high.
  • each panchromatic filter 233 includes 9 sub-filters 2331
  • each color filter 234 includes 9 sub-filters 2341
  • the pixel array 24 includes a plurality of panchromatic pixels 2431 and a plurality of color pixels 2441
  • each panchromatic pixel 2431 corresponds to a sub-filter 2331 of the panchromatic filter 233
  • each color pixel 2441 corresponds to a sub-filter 2341 of the color filter 234
  • the panchromatic pixel 2431 and the color pixel 2441 are used to receive light passing through the corresponding sub-filters to generate electrical signals.
  • the phase information of the pixels corresponding to the 9 sub-filters can be combined and output to obtain phase information with a high signal-to-noise ratio.
  • the phase information of the pixel corresponding to each sub-filter can also be output separately, so as to obtain phase information with high resolution. These modes adapt to different application scenarios and can improve the focus quality in various scenes.
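A toy sketch of the two readout options described above, i.e. combining the phase information of the 9 pixels under one filter versus outputting each pixel separately; the function names and the use of the mean as the combining rule are illustrative assumptions:

```python
def combine_3x3(phase_block):
    """Merge the phase information of a 3x3 pixel group into a single
    value (mean), trading resolution for signal-to-noise ratio."""
    assert len(phase_block) == 3 and all(len(row) == 3 for row in phase_block)
    return sum(sum(row) for row in phase_block) / 9.0


def per_pixel(phase_block):
    """Output each pixel's phase information separately (full resolution)."""
    return [value for row in phase_block for value in row]
```

The combined mode yields one value per 9 pixels with a higher signal-to-noise ratio; the per-pixel mode yields all 9 values with full resolution.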
  • the smallest repeating unit 231 in the filter array 23 includes 4 filter groups 232 , and the 4 filter groups 232 are arranged in a matrix.
  • Each filter group 232 comprises panchromatic filters 233 and color filters 234, and each panchromatic filter 233 and each color filter 234 has 9 sub-filters, so the filter group 232 includes a total of 36 sub-filters.
  • the pixel array 24 includes a plurality of minimum repeating units 241 corresponding to the plurality of minimum repeating units 231 .
  • Each minimum repeating unit 241 includes 4 pixel groups 242 , and the 4 pixel groups 242 are arranged in a matrix.
  • Each pixel group 242 corresponds to a filter group 232 .
  • the readout circuit 25 is electrically connected to the pixel array 24 for controlling the exposure of the pixel array 24 and reading and outputting the pixel values of the pixel points.
  • the readout circuit 25 includes a vertical drive unit 251 , a control unit 252 , a column processing unit 253 , and a horizontal drive unit 254 .
  • the vertical driving unit 251 includes a shift register and an address decoder.
  • the vertical driving unit 251 includes readout scanning and reset scanning functions.
  • the control unit 252 configures timing signals according to the operation mode, and uses various timing signals to control the vertical driving unit 251 , the column processing unit 253 and the horizontal driving unit 254 to work together.
  • the column processing unit 253 may have an analog-to-digital (A/D) conversion function for converting an analog pixel signal into a digital format.
  • the horizontal driving unit 254 includes a shift register and an address decoder. The horizontal driving unit 254 sequentially scans the pixel array 24 column by column.
  • each filter group 232 includes a color filter 234 and a panchromatic filter 233; each panchromatic filter 233 in the filter group 232 is arranged in the first diagonal direction D1, and each color filter 234 in the filter group 232 is arranged in the second diagonal direction.
  • the direction of the first diagonal line D1 and the direction of the second diagonal line D2 are different, which can take into account both color performance and low-light focusing quality.
  • the direction of the first diagonal D1 is different from the direction of the second diagonal D2. Specifically, the direction of the first diagonal D1 may be non-parallel to the direction of the second diagonal D2, or the direction of the first diagonal D1 may be perpendicular to the direction of the second diagonal D2, etc.
  • one color filter 234 and one panchromatic filter 233 can be located on the first diagonal D1, and the other color filter 234 and the other panchromatic filter 233 can be located on the second diagonal D2.
  • each pixel corresponds to a photosensitive element.
  • the photosensitive element is an element capable of converting light signals into electrical signals.
  • the photosensitive element can be a photodiode.
  • each panchromatic pixel 2431 includes a photodiode (PD, PhotoDiode), and each color pixel 2441 includes a photodiode (PD, PhotoDiode).
  • the smallest repeating unit 231 in the filter array 23 includes 4 filter groups 232 , and the 4 filter groups 232 are arranged in a matrix.
  • Each filter set 232 includes two panchromatic filters 233 and two color filters 234 .
  • the panchromatic filter 233 includes 9 sub-filters 2331, and the color filter 234 includes 9 sub-filters 2341, so the minimum repeating unit 231 has 12 rows and 12 columns with 144 sub-filters, arranged as follows:
  • w represents the panchromatic sub-filter 2331
  • a, b and c all represent the color sub-filter 2341 .
  • the panchromatic sub-filter 2331 refers to a sub-filter that can filter out all light rays other than the visible light band
  • the color sub-filter 2341 may be a red sub-filter, a green sub-filter, a blue sub-filter, a magenta sub-filter, a cyan sub-filter, or a yellow sub-filter.
  • the red sub-filter is a sub-filter that filters out all light except red light
  • the green sub-filter is a sub-filter that filters out all light except green light
  • the blue sub-filter is a sub-filter that filters out all light except blue light
  • the magenta sub-filter filters out all light except magenta light, and the cyan sub-filter filters out all light except cyan light
  • the yellow sub-filter is a sub-filter that filters out all light except yellow light.
  • a can be a red, green, blue, magenta, cyan, or yellow sub-filter
  • b can be a red, green, blue, magenta, cyan, or yellow sub-filter
  • c can be a red, green, blue, magenta, cyan, or yellow sub-filter.
  • for example, b is the red sub-filter, a is the green sub-filter, and c is the blue sub-filter; or c is the red sub-filter, a is the green sub-filter, and b is the blue sub-filter; or a is the red sub-filter, b is the blue sub-filter, and c is the green sub-filter, etc., which are not limited here. For another example, b is a magenta sub-filter, a is a cyan sub-filter, and c is a yellow sub-filter, etc.
  • the color filter may further include sub-filters of other colors, such as an orange sub-filter, a purple sub-filter, etc., which are not limited here.
  • the minimum repeating unit 231 in the filter array 23 includes 4 filter groups 232 , and the 4 filter groups 232 are arranged in a matrix.
  • Each filter set 232 includes color filters 234 and panchromatic filters 233; each color filter 234 in the filter set 232 is arranged in the direction of the first diagonal D1, and each panchromatic filter 233 in the filter set 232 is arranged in the direction of the second diagonal D2.
  • the pixels of the pixel array (not shown in FIG. 7 , refer to FIG. 6 ) are arranged corresponding to the sub-filters of the filter array, and each pixel corresponds to a photosensitive element.
  • each filter set 232 includes 2 panchromatic filters 233 and 2 color filters 234; the panchromatic filter 233 includes 9 sub-filters 2331 and the color filter 234 includes 9 sub-filters 2341, so the minimum repeating unit 231 has 12 rows and 12 columns with 144 sub-filters; as shown in Figure 5, the arrangement is:
  • w represents a panchromatic sub-filter
  • a, b, and c all represent color sub-filters.
  • the advantage of quad is that pixels can be locally combined 2-by-2 or binned 3-by-3 to obtain images of different resolutions with a high signal-to-noise ratio.
  • the quad full-size output has a high pixel count, so a full-size, full-resolution image with higher definition is obtained.
  • the advantage of RGBW is that it uses W pixels to increase the overall light intake of the image, thereby improving the signal-to-noise ratio of the image quality.
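The 2-by-2 and 3-by-3 binning mentioned above can be sketched generically; block averaging is used here as an illustrative assumption for how pixels are combined:

```python
def bin_pixels(img, factor):
    """Average each factor x factor block of img, producing a lower-
    resolution image; averaging N pixels raises the signal-to-noise ratio
    at the cost of resolution (quad-style 2x2 or 3x3 binning)."""
    rows, cols = len(img), len(img[0])
    assert rows % factor == 0 and cols % factor == 0
    return [
        [
            sum(img[r + i][c + j] for i in range(factor) for j in range(factor))
            / (factor * factor)
            for c in range(0, cols, factor)
        ]
        for r in range(0, rows, factor)
    ]
```

A 12 x 12 array binned 3-by-3 yields a 4 x 4 result, while 2-by-2 binning of the same array yields 6 x 6, illustrating the "images of different resolutions" trade-off.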
  • the microlens array 22 includes a plurality of minimum repeating units 221, the minimum repeating unit 221 includes a plurality of microlens groups 222, and the microlens group 222 includes a plurality of microlenses 2221 .
  • Each microlens group 222 corresponds to a panchromatic pixel group 243 or a color pixel group 244.
  • A plurality of microlenses 2211 are included in the microlens group 222; for example, the microlens group 222 includes five microlenses 2211.
  • These five microlenses 2211 include four first microlenses 2211a and one second microlens 2211b; wherein each first microlens 2211a corresponds to two pixels, and the second microlens 2211b corresponds to one pixel. The four first microlenses include two first microlenses in the first direction and two first microlenses in the second direction; the first microlenses in the first direction are arranged in the microlens group along the first direction, and the first microlenses in the second direction are arranged in the microlens group along the second direction. The second microlens 2211b is located at the center of the microlens group 222 and corresponds to the central pixel of the pixel group corresponding to the microlens group.
  • The two first microlenses in the first direction are arranged centrally symmetrically relative to the second microlens 2211b along the first diagonal direction; the two first microlenses in the second direction are arranged centrally symmetrically relative to the second microlens 2211b along the second diagonal direction.
  • The second microlens 2211b is located at one of the four corners of the microlens group, and corresponds to the pixel at the matching corner of the pixel group corresponding to the microlens group.
  • the second microlens 2211b is located at the lower right corner of the microlens group 222 .
  • The second microlens 2211b can also be positioned at the lower left corner or the upper left corner of the microlens group 222; this application does not limit this.
  • One microlens among the plurality of microlenses 2211 corresponds to at least four pixels.
  • The microlens group 222 includes 4 microlenses 2211: 1 third microlens 2211c, 2 fourth microlenses 2211d and 1 fifth microlens 2211e; wherein the third microlens 2211c corresponds to 4 pixels, each fourth microlens 2211d corresponds to 2 pixels, and the fifth microlens 2211e corresponds to 1 pixel.
  • the fourth microlens 2211d here may be the same microlens as the first microlens 2211a; the fifth microlens 2211e here may be the same microlens as the second microlens 2211b, which is not limited in this application.
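  • The two microlens-group layouts described above can be pictured as a mapping from each microlens to the number of pixels it covers; a sketch with hypothetical names (not from the patent), checking that both layouts cover the same 3 x 3 pixel group:

```python
# Five-lens layout: four first microlenses (2 pixels each) and one
# second microlens (1 pixel, the central pixel of the group).
five_lens_group = {
    "first_dir1_a": 2, "first_dir1_b": 2,  # first microlenses in the first direction
    "first_dir2_a": 2, "first_dir2_b": 2,  # first microlenses in the second direction
    "second": 1,                           # central second microlens
}

# Four-lens layout: one third microlens (4 pixels), two fourth
# microlenses (2 pixels each) and one fifth microlens (1 pixel).
four_lens_group = {"third": 4, "fourth_a": 2, "fourth_b": 2, "fifth": 1}

# Both layouts cover the same 9-pixel (3 x 3) pixel group.
assert sum(five_lens_group.values()) == 9
assert sum(four_lens_group.values()) == 9
```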
  • a focus control method is provided, which is applied to the image sensor in the above embodiment, the image sensor includes a pixel array and a filter array, and the method includes:
  • Operation 1020 determine a phase information output mode adapted to the light intensity of the current shooting scene; wherein, in different phase information output modes, the sizes of the output phase arrays are different.
  • The light intensity of the current shooting scene varies, and since the sensitivity of an RGB pixel array differs under different light intensities, under some light intensities the accuracy of the phase difference calculated with the RGB pixel array is low, which in turn significantly decreases the accuracy of focusing.
  • Light intensity is also called illumination intensity.
  • Illumination intensity is a physical term referring to the luminous flux of visible light received per unit area, referred to as illuminance; its unit is lux (lx).
  • Light intensity indicates how strong or weak the light is and the degree to which the surface of an object is illuminated. The following table shows the light intensity values under different weather conditions and locations:
  • the phase information output mode adapted to the light intensity of the scene, and then use different phase information output modes to output the phase information of the pixel array.
  • The phase information output mode refers to the mode of processing the original phase information of the pixel array to generate the phase information finally output for the pixel array.
  • the sizes of the output phase arrays are different. That is, under different light intensities of the current shooting scene, the sizes of the phase arrays output by the same pixel array are different.
  • the phase information corresponding to the same pixel array is directly output as the phase array corresponding to the pixel array or combined to a certain extent to generate the phase array corresponding to the pixel array. For example, if the light intensity of the current shooting scene is relatively high, the phase information corresponding to the same pixel array may be directly output as the phase array corresponding to the pixel array.
  • the size of the output phase array is equal to the size of the pixel array.
  • phase information corresponding to the same pixel array may be combined to a certain extent to generate a phase array corresponding to the pixel array.
  • the size of the output phase array is smaller than the size of the pixel array.
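  • The two cases above, direct output and combined output, can be sketched together; a simplified illustration (names are ours), assuming the combination averages neighborhoods of phase values, since the patent does not fix the combination operation here:

```python
import numpy as np

def output_phase_array(phase, combine=1):
    """Full-size output (combine=1) returns the phase array unchanged, so
    its size equals the pixel array; combine>1 merges neighborhoods of
    phase values, giving a smaller array with a higher signal-to-noise ratio.
    """
    if combine == 1:
        return phase
    h, w = phase.shape
    return phase.reshape(h // combine, combine, w // combine, combine).mean(axis=(1, 3))

phase = np.zeros((12, 12))
assert output_phase_array(phase).shape == (12, 12)   # equal to the pixel array
assert output_phase_array(phase, 2).shape == (6, 6)  # smaller than the pixel array
```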
  • Operation 1040: output the phase array corresponding to the pixel array according to the phase information output mode; wherein the phase array includes the phase information corresponding to the target pixels in the pixel array.
  • the phase information corresponding to the pixel array can be output according to the phase information output mode. Specifically, when outputting the phase information corresponding to the pixel array, it may be output in the form of a phase array.
  • the phase array includes phase information corresponding to the pixel array.
  • the phase information corresponding to the same pixel array is directly output as the phase array corresponding to the pixel array or combined to a certain extent , to generate a phase array corresponding to the pixel array, which is not limited in this application.
  • Operation 1060 calculate the phase difference of the pixel array based on the phase array, and perform focus control according to the phase difference.
  • The phase difference of the pixel array can be calculated based on the phase information in the phase array. Assuming the phase array of the pixel array in the second direction can be obtained, the phase difference is calculated based on two adjacent pieces of phase information in the second direction, finally obtaining the phase difference of the entire pixel array in the second direction. Assuming the phase array of the pixel array in the first direction can be obtained, the phase difference is calculated based on two adjacent pieces of phase information in the first direction, finally obtaining the phase difference of the entire pixel array in the first direction; the second direction is different from the first direction.
  • the second direction may be the vertical direction of the pixel array
  • the first direction may be the horizontal direction of the pixel array
  • the second direction and the first direction are perpendicular to each other.
  • The phase differences of the entire pixel array in the second direction and the first direction can be obtained at the same time, and the phase difference of the pixel array in other directions can also be calculated, such as the diagonal directions (including the first diagonal direction and the second diagonal direction, where the first diagonal direction is perpendicular to the second diagonal direction), etc., which are not limited in this application.
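  • The per-direction computation described above, taking the difference of two adjacent pieces of phase information along the chosen direction, might look like this (a sketch with our own names; a real implementation may aggregate or filter the raw differences further):

```python
import numpy as np

def phase_difference(phase, direction):
    """Difference of adjacent phase values along one direction.

    'second' pairs vertically adjacent entries (e.g. the vertical
    direction of the pixel array); 'first' pairs horizontally
    adjacent entries (e.g. the horizontal direction).
    """
    if direction == "second":
        return phase[1::2, :] - phase[0::2, :]   # adjacent rows
    if direction == "first":
        return phase[:, 1::2] - phase[:, 0::2]   # adjacent columns
    raise ValueError(direction)

p = np.arange(16, dtype=float).reshape(4, 4)
assert phase_difference(p, "second").shape == (2, 4)
assert phase_difference(p, "first").shape == (4, 2)
```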
  • If the preview image contains texture features parallel to a given direction, the phase difference parallel to that direction is almost 0, so focusing obviously cannot be based on a collected phase difference parallel to that direction. Therefore, if the preview image corresponding to the current shooting scene includes texture features in the first direction, the phase difference of the pixel array in the second direction is calculated based on the phase array of the pixel array in the second direction, and focus control is performed according to the phase difference of the pixel array in the second direction.
  • the preview image includes the texture feature in the first direction, which means that the preview image includes horizontal stripes, which may be solid color stripes in the horizontal direction.
  • focus control is performed based on the phase difference in the vertical direction.
  • If the preview image corresponding to the current shooting scene includes texture features in the second direction, focus control is performed based on the phase difference in the first direction. If the preview image corresponding to the current shooting scene includes texture features in the first diagonal direction, focus control is performed based on the phase difference in the second diagonal direction, and vice versa. In this way, for texture features in different directions, the phase difference can be accurately collected, enabling accurate focusing.
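  • The selection rule above, focusing with the phase difference perpendicular to the detected texture direction, can be written as a simple lookup (the direction names are ours; the diagonal pairing follows the text):

```python
# Texture parallel to a direction yields a near-zero phase difference in
# that direction, so focus control uses the perpendicular direction.
PERPENDICULAR = {
    "first": "second",                     # horizontal stripes -> vertical PD
    "second": "first",                     # vertical stripes -> horizontal PD
    "first_diagonal": "second_diagonal",
    "second_diagonal": "first_diagonal",
}

def pd_direction_for_texture(texture_direction):
    """Return the direction whose phase difference should drive focusing."""
    return PERPENDICULAR[texture_direction]

assert pd_direction_for_texture("first") == "second"
assert pd_direction_for_texture("second_diagonal") == "first_diagonal"
```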
  • the phase information output mode adapted to the light intensity of the current shooting scene is determined; wherein, in different phase information output modes, the sizes of the output phase arrays are different.
  • a phase array corresponding to the pixel array is output; wherein, the phase array includes phase information corresponding to the pixel array.
  • the phase difference of the pixel array is calculated based on the phase array, and focus control is performed according to the phase difference.
  • Different phase information output modes can be adopted for the same pixel array, and phase arrays of different sizes can be output based on the original phase information. Since phase arrays of different sizes have different signal-to-noise ratios, the accuracy of the phase information output under different light intensities can be improved, thereby improving the accuracy of focus control.
  • operation 1040 is to output the phase array corresponding to the pixel array according to the phase information output mode, including:
  • Operation 1042: determine the target pixel group from the color pixel groups and panchromatic pixel groups in the pixel array, determine the target microlens corresponding to the target pixel group, and use at least two pixels corresponding to the target microlens as target pixels.
  • a phase information output mode adapted to the light intensity of the current shooting scene is determined. That is, the determined phase information output modes are different under different light intensities, and the sizes of the output phase arrays are also different under different phase information output modes.
  • The target pixel group is determined from the color pixel groups and panchromatic pixel groups in the pixel array. Because the phase information output mode is determined based on the light intensity of the current shooting scene, and the sensitivity of color pixels and panchromatic pixels differs under different light intensities, the target pixel group can be determined differently from the color pixel groups and panchromatic pixel groups in the pixel array in different phase information output modes.
  • all or part of the color pixel groups may be selected from the color pixel groups and panchromatic pixel groups in the pixel array as the target pixel group. It is also possible to select all or part of panchromatic pixel groups from the color pixel groups and panchromatic pixel groups in the pixel array as the target pixel group. It is also possible to select all or part of the color pixel groups and all or part of the panchromatic pixel groups from the color pixel groups and panchromatic pixel groups in the pixel array as the target pixel group, which is not limited in this application.
  • a target microlens corresponding to the target pixel group is determined, and at least two pixels corresponding to the target microlens are used as target pixels.
  • The target pixel group corresponds to multiple microlenses, some of which correspond to at least two pixels and some of which correspond to one pixel. Therefore, the target microlens must be determined from the microlenses corresponding to at least two pixels: either all microlenses corresponding to at least two pixels are determined as target microlenses, or a selected part of them is determined as target microlenses, which is not limited in the present application.
  • At least two pixels corresponding to the target microlens are used as target pixels. That is, either at least two of the pixels corresponding to the target microlens are used as target pixels, or all pixels corresponding to the target microlens are used as target pixels, which is not limited in the present application.
  • Operation 1044 for each target pixel group, acquire phase information of the target pixel.
  • a phase array corresponding to the pixel array is generated according to the phase information of the target pixel.
  • phase information of the target pixel is acquired for each target pixel group.
  • the phase array corresponding to the pixel array can be generated directly according to the phase information of the target pixel.
  • the phase information of the target pixel may also be combined to a certain extent to generate a phase array corresponding to the pixel array, which is not limited in the present application.
  • Under different phase information output modes, the target pixel group can be determined from the color pixel groups and panchromatic pixel groups in the pixel array; that is, the target pixel group can be determined based on at least one of the color pixel groups and the panchromatic pixel groups. The target pixels are then determined from the target pixel group, and a phase array corresponding to the pixel array is generated according to the phase information of the target pixels. The phase array is thus generated from two dimensions, determining the target pixel group and determining the target pixels, which ultimately improves the accuracy of the generated phase array.
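  • The two selection steps above can be sketched as follows; the data layout is hypothetical (each microlens of a target pixel group mapped to the pixel coordinates it covers), but the rule, keeping every pixel under a microlens that covers at least two pixels, follows the text:

```python
def select_target_pixels(lens_to_pixels, min_pixels=2):
    """Target microlenses are those covering at least min_pixels pixels;
    all pixels under a target microlens become target pixels."""
    targets = []
    for lens, pixels in lens_to_pixels.items():
        if len(pixels) >= min_pixels:
            targets.extend(pixels)
    return targets

# Hypothetical fragment of a 3 x 3 panchromatic pixel group with one
# 2-pixel first microlens and the single-pixel second microlens.
group = {
    "first_h": [(0, 0), (0, 1)],  # 2-pixel microlens -> target
    "second": [(1, 1)],           # 1-pixel microlens -> skipped
}
assert select_target_pixels(group) == [(0, 0), (0, 1)]
```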
  • the target microlens corresponding to the target pixel group is determined, and at least two pixels corresponding to the target microlens are used as target pixels, including:
  • the target microlenses in the first direction correspond to at least 2 target pixels arranged along the first direction and adjacent to each other;
  • At least two pixels corresponding to the target microlenses in the first direction are used as target pixels.
  • the target microlens corresponding to the target pixel group is determined. Specifically, assuming that the panchromatic pixel group in the pixel array is determined as the target pixel group, the target microlens is determined from the microlenses corresponding to the panchromatic pixel group. Because the target microlens corresponds to at least two pixels, as shown in FIG.
  • The microlenses corresponding to the panchromatic pixel group include four first microlenses 2211a and one second microlens 2211b, wherein each first microlens 2211a corresponds to 2 pixels and the second microlens 2211b corresponds to 1 pixel.
  • the target microlens in the first direction is further determined from the four first microlenses 2211a.
  • the target microlenses in the first direction correspond to at least two target pixels arranged along the first direction and adjacent to each other.
  • Assuming the first direction is the horizontal direction, then, in conjunction with FIG. 8, it can be seen that the four first microlenses 2211a include the first microlenses 2211a in the first direction (filled with gray in the figure) and the first microlenses 2211a in the second direction (filled with oblique lines in the figure).
  • the target microlens in the first direction corresponding to the target pixel group is the first microlens 2211a in the first direction (filled in gray in the figure).
  • the first direction and the second direction here can be interchanged, which is not limited in this application.
  • At least two pixels corresponding to the first microlens 2211a in the first direction are used as target pixels. Then, for each target pixel group, the phase information of the target pixel in the second direction can be acquired. And according to the phase information of the target pixel, a phase array corresponding to the pixel array is generated.
  • The panchromatic pixel group corresponds to four microlenses 2211, including one third microlens 2211c, two fourth microlenses 2211d and one fifth microlens 2211e; wherein the third microlens 2211c corresponds to 4 pixels, each fourth microlens 2211d corresponds to 2 pixels, and the fifth microlens 2211e corresponds to 1 pixel.
  • the target microlens in the first direction is further determined from one third microlens 2211c and two fourth microlenses 2211d. Since the third microlens 2211c includes 4 pixels arranged in a 2 ⁇ 2 array, the third microlens 2211c can be regarded as a microlens belonging to the first direction or a microlens belonging to the second direction. Then, the target microlens in the first direction corresponding to the target pixel group is determined to be the first microlens 2211a (filled with gray in the figure) and the third microlens 2211c in the first direction.
  • At least two pixels corresponding to the first microlens 2211a (filled with gray in the figure) and at least four pixels corresponding to the third microlens 2211c in the first direction are used as target pixels. Then, for each target pixel group, the phase information of the target pixel in the second direction can be acquired. And according to the phase information of the target pixel, a phase array corresponding to the pixel array is generated.
  • the target microlens corresponding to the target pixel group is determined.
  • the microlenses in the first direction may be determined as the target microlenses from the microlenses corresponding to the target pixel group.
  • at least two pixels corresponding to the first microlens in the first direction may be used as target pixels.
  • phase information of the target pixel in the second direction is acquired.
  • a phase array corresponding to the pixel array is generated.
  • a phase array corresponding to the pixel array in the second direction can be obtained.
  • operation 1042 determining a target microlens corresponding to the target pixel group, using at least two pixels corresponding to the target microlens as target pixels, further includes:
  • the target microlens in the second direction corresponds to at least 2 target pixels arranged along the second direction and adjacent to each other; the second direction and the first direction are perpendicular to each other;
  • At least two pixels corresponding to the target microlenses in the second direction are used as target pixels.
  • the target microlens corresponding to the target pixel group is determined. Specifically, assuming that the panchromatic pixel group in the pixel array is determined as the target pixel group, the target microlens is determined from the microlenses corresponding to the panchromatic pixel group. Because the target microlens corresponds to at least two pixels, as shown in FIG.
  • The microlenses corresponding to the panchromatic pixel group include four first microlenses 2211a and one second microlens 2211b, wherein each first microlens 2211a corresponds to 2 pixels and the second microlens 2211b corresponds to 1 pixel.
  • The target microlens in the second direction is further determined from the four first microlenses 2211a.
  • The target microlenses in the second direction correspond to at least two target pixels arranged along the second direction and adjacent to each other.
  • the second direction and the first direction are perpendicular to each other, assuming that the first direction is a horizontal direction, then the second direction is a vertical direction.
  • the four first microlenses 2211a include the first microlens 2211a in the first direction (filled with gray in the figure) and the first microlens 2211a in the second direction (filled with oblique lines in the figure).
  • the target microlens in the second direction corresponding to the target pixel group is the first microlens 2211a in the second direction (filled with oblique lines in the figure).
  • the first direction and the second direction here can be interchanged, which is not limited in this application.
  • At least two pixels corresponding to the first microlens 2211a in the second direction are used as target pixels. Then, for each target pixel group, the phase information of the target pixel in the second direction can be acquired. And according to the phase information of the target pixel, a phase array corresponding to the pixel array is generated.
  • The panchromatic pixel group corresponds to four microlenses 2211, including one third microlens 2211c, two fourth microlenses 2211d and one fifth microlens 2211e; wherein the third microlens 2211c corresponds to 4 pixels, each fourth microlens 2211d corresponds to 2 pixels, and the fifth microlens 2211e corresponds to 1 pixel.
  • the target microlens in the second direction is further determined from one third microlens 2211c and two fourth microlenses 2211d. Since the third microlens 2211c includes 4 pixels arranged in a 2 ⁇ 2 array, the third microlens 2211c can be regarded as a microlens belonging to the first direction or a microlens belonging to the second direction. Then, the target microlens in the second direction corresponding to the target pixel group is determined to be the first microlens 2211a (filled with oblique lines in the figure) and the third microlens 2211c in the second direction.
  • At least two pixels corresponding to the first microlens 2211a (filled with oblique lines in the figure) and at least four pixels corresponding to the third microlens 2211c in the second direction are used as target pixels. Then, for each target pixel group, the phase information of the target pixel in the first direction can be acquired. And according to the phase information of the target pixel, a phase array corresponding to the pixel array is generated.
  • the target microlens corresponding to the target pixel group is determined.
  • the microlenses in the second direction may be determined as the target microlenses from the microlenses corresponding to the target pixel group.
  • at least two pixels corresponding to the first microlens in the second direction may be used as target pixels.
  • the phase information of the target pixel in the first direction is acquired.
  • a phase array corresponding to the pixel array is generated.
  • the phase array corresponding to the pixel array in the first direction can be obtained.
  • the target microlens corresponding to the target pixel group is determined, and at least two pixels corresponding to the target microlens are used as target pixels, including:
  • Determine the target microlenses in the first direction and the target microlenses in the second direction corresponding to the target pixel group;
  • the target microlenses in the first direction correspond to at least 2 target pixels arranged along the first direction and adjacent to each other;
  • The target microlenses in the second direction correspond to at least 2 target pixels arranged along the second direction and adjacent to each other;
  • the second direction and the first direction are perpendicular to each other;
  • At least two pixels corresponding to the target microlens in the first direction and at least two pixels corresponding to the target microlens in the second direction are used as target pixels.
  • the target microlens corresponding to the target pixel group is determined. Specifically, assuming that the panchromatic pixel group in the pixel array is determined as the target pixel group, the target microlens is determined from the microlenses corresponding to the panchromatic pixel group. Because the target microlens corresponds to at least two pixels, as shown in FIG.
  • The microlenses corresponding to the panchromatic pixel group include four first microlenses 2211a and one second microlens 2211b, wherein each first microlens 2211a corresponds to 2 pixels and the second microlens 2211b corresponds to 1 pixel.
  • the target microlenses in the first direction and the target microlenses in the second direction are further determined from the four first microlenses 2211a.
  • The target microlenses in the first direction correspond to at least 2 target pixels arranged along the first direction and adjacent to each other.
  • the first direction is the horizontal direction
  • The four first microlenses 2211a include the first microlenses 2211a in the first direction (filled with gray in the figure) and the first microlenses 2211a in the second direction (filled with oblique lines in the figure).
  • It is determined that the target microlens in the first direction corresponding to the target pixel group is the first microlens 2211a in the first direction (filled with gray in the figure), and that the target microlens in the second direction corresponding to the target pixel group is the first microlens 2211a in the second direction (filled with oblique lines in the figure).
  • the first direction and the second direction here can be interchanged, which is not limited in this application.
  • At least two pixels corresponding to the first microlens 2211a in the first direction (filled with gray in the figure) and the first microlens 2211a in the second direction (filled with oblique lines in the figure) are used as target pixels. Then, for each target pixel group, the phase information of the target pixel in the first direction and the phase information of the second direction can be acquired. And according to the phase information of the target pixel, a phase array corresponding to the pixel array is generated.
  • The panchromatic pixel group corresponds to four microlenses 2211, including one third microlens 2211c, two fourth microlenses 2211d and one fifth microlens 2211e; wherein the third microlens 2211c corresponds to 4 pixels, each fourth microlens 2211d corresponds to 2 pixels, and the fifth microlens 2211e corresponds to 1 pixel.
  • the target microlens in the first direction and the target microlens in the second direction are further determined from one third microlens 2211c and two fourth microlenses 2211d. Since the third microlens 2211c includes 4 pixels arranged in a 2 ⁇ 2 array, the third microlens 2211c can be regarded as a microlens belonging to the first direction or a microlens belonging to the second direction.
  • It is determined that the target microlenses in the first direction and the target microlenses in the second direction corresponding to the target pixel group are the first microlens 2211a in the first direction (filled with gray in the figure), the first microlens 2211a in the second direction (filled with oblique lines in the figure), and the third microlens 2211c.
  • At least two pixels corresponding to the first microlens 2211a in the first direction (filled with gray in the figure), at least two pixels corresponding to the first microlens 2211a in the second direction (filled with oblique lines in the figure), and at least four pixels corresponding to the third microlens 2211c are used as target pixels. Then, for each target pixel group, the phase information of the target pixels in the first direction and the second direction can be acquired. And according to the phase information of the target pixels, a phase array corresponding to the pixel array is generated.
  • the target microlens corresponding to the target pixel group is determined.
  • the microlenses in the first direction may be determined as the target microlenses from the microlenses corresponding to the target pixel group.
  • at least two pixels corresponding to the first microlens in the first direction may be used as target pixels.
  • phase information of the target pixel in the second direction is acquired.
  • At least two pixels corresponding to the first microlens in the second direction may also be used as target pixels.
  • the phase information of the target pixel in the first direction is acquired.
  • phase array corresponding to the pixel array is generated.
  • a phase array corresponding to the pixel array in two directions, the first direction and the second direction can be obtained. Therefore, it is possible to calculate phase differences in two directions, and then achieve focusing in two directions.
  • the specific implementation operation of determining the phase information output mode adapted to the light intensity of the current shooting scene includes:
  • Operation 1022 determine the target light intensity range to which the light intensity of the current shooting scene belongs; wherein, different light intensity ranges correspond to different phase information output modes;
  • Operation 1024 determine a phase information output mode adapted to the light intensity of the current shooting scene.
  • the light intensities may be divided into different light intensity ranges in order of magnitude.
  • the preset threshold of the light intensity can be determined according to the exposure parameters and the size of the pixels in the pixel array.
  • The exposure parameters include shutter speed, lens aperture size and sensitivity (ISO).
  • Different phase information output modes are set for different light intensity ranges. Specifically, in order of decreasing light intensity of the ranges, the size of the phase array output by the phase information output mode set for each range decreases successively.
  • the light intensity range is used as the target light intensity range to which the light intensity of the current shooting scene belongs.
  • the phase information output mode corresponding to the target light intensity range is used as the phase information output mode adapted to the light intensity of the current shooting scene.
  • When determining the phase information output mode adapted to the light intensity of the current shooting scene, since different light intensity ranges correspond to different phase information output modes, first determine the target light intensity range to which the light intensity of the current shooting scene belongs, and then determine the adapted phase information output mode according to the target light intensity range. Different phase information output modes are set in advance for different light intensity ranges, and the sizes of the phase arrays output in the respective phase information output modes are different. Therefore, based on the light intensity of the current shooting scene, the phase information of the pixel array can be calculated more finely, so as to achieve more accurate focusing.
  • the phase information output mode includes the full-size output mode and the first-size output mode, and the size of the phase array in the full-size output mode is larger than that in the first-size output mode.
  • Operation 1024a if the light intensity of the current shooting scene is greater than the first preset threshold, determine that the phase information output mode adapted to the light intensity of the current shooting scene is the full-size output mode;
  • Assume the phase information output mode corresponding to a certain light intensity range is the full-size output mode. Then, if it is determined that the light intensity of the current shooting scene is greater than the first preset threshold, the light intensity of the current shooting scene falls within that light intensity range; that is, it is determined that the phase information output mode adapted to the light intensity of the current shooting scene is the full-size output mode.
  • in the full-size output mode, all of the original phase information of the pixel array is output to generate the phase array of the pixel array.
  • the phase information output mode corresponding to the light intensity range is the first size output mode. If it is determined that the light intensity of the current shooting scene is greater than the second preset threshold and less than or equal to the first preset threshold, then the light intensity of the current shooting scene falls within the light intensity range. That is, it is determined that the phase information output mode adapted to the light intensity of the current shooting scene is the first size output mode. Wherein, outputting the phase array in the first size output mode is to combine and output the original phase information of the pixel array to generate the phase array of the pixel array.
  • the size of the phase array in the full-size output mode is greater than the size of the phase array in the first-size output mode
  • the light intensity of the current shooting scene is greater than the first preset threshold
  • the phase information output mode adapted to the light intensity is the full-size output mode. If the light intensity of the current shooting scene is greater than the second preset threshold and less than or equal to the first preset threshold, it is determined that the phase information output mode adapted to the light intensity of the current shooting scene is the first-size output mode.
  • the full-size output mode is used to output a phase array with the same size as the pixel array, while the first-size output mode is used to output a phase array smaller than the pixel array. That is, when the light intensity of the current shooting scene is weaker, the signal-to-noise ratio of the phase information is improved by reducing the size of the phase array.
  • the pixel array can be an RGBW pixel array including a plurality of minimum repeating units 241; each minimum repeating unit 241 includes a plurality of pixel groups 242, and the plurality of pixel groups 242 includes panchromatic pixel groups 243 and color pixel groups 244.
  • Each panchromatic pixel group 243 includes 9 panchromatic pixels 2431
  • each color pixel group 244 includes 9 color pixels 2441.
  • Each panchromatic pixel 2431 includes 2 sub-pixels arranged in an array
  • each color pixel 2441 includes 2 sub-pixels arranged in an array.
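  • As an illustrative sketch (not taken from the patent figures), a 12 × 12 RGBW array of this kind can be laid out in Python from a 4 × 4 grid of 3 × 3 pixel groups. The particular group arrangement below is an assumed example chosen only to match the stated counts (2 red, 4 green, 2 blue, 8 panchromatic groups); sub-pixels are omitted. Here 'a' = green, 'b' = red, 'c' = blue, 'w' = panchromatic, as in the figure legend that follows:

```python
# Assumed 4x4 arrangement of pixel groups; consistent with the counts in the
# text (2 red 'b', 4 green 'a', 2 blue 'c', 8 panchromatic 'w' groups), but
# the exact placement is a hypothetical example, not the patent's figure.
GROUP_PATTERN = [
    ["w", "a", "w", "b"],
    ["a", "w", "b", "w"],
    ["w", "c", "w", "a"],
    ["c", "w", "a", "w"],
]

def build_pixel_array(group_pattern, group_size=3):
    """Expand each group label into a group_size x group_size block of pixels."""
    rows = []
    for group_row in group_pattern:
        for _ in range(group_size):  # each group row spans 3 pixel rows
            rows.append([ch for ch in group_row for _ in range(group_size)])
    return rows

array = build_pixel_array(GROUP_PATTERN)
print(len(array), len(array[0]))   # 12 12
```

  • Counting characters in the expanded array recovers the stated totals: 8 panchromatic groups × 9 pixels = 72 'w' pixels, 4 green groups × 9 = 36 'a' pixels, and 18 pixels each for red and blue.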
  • the target pixel group is selected from the color pixel group and the panchromatic pixel group in the pixel array, including:
  • the color pixel group in the pixel array is used as the target pixel group; or the color pixel group and the panchromatic pixel group in the pixel array are used as the target pixel group; wherein, one of the target pixel groups corresponds to one pixel group;
  • according to the phase information of the target pixel, generating a phase array corresponding to the pixel array includes:
  • according to the phase information of the target pixel, a full-size phase array corresponding to the pixel array is generated.
  • This embodiment is a specific implementation operation of outputting the phase array corresponding to the pixel array according to the full-size output mode when the light intensity of the current shooting scene is greater than the first preset threshold.
  • the first preset threshold may be 2000 lux, which is not limited in this application; that is, the scene is in an environment where the light intensity is greater than 2000 lux.
  • the color pixel groups are determined from the pixel array as the target pixel groups for calculating the phase information. When the light intensity of the current shooting scene is greater than the first preset threshold, that is, in a scene with sufficient light, the high sensitivity of the panchromatic pixels makes them easy to saturate, and correct phase information cannot be obtained after saturation. Therefore, the phase information of the color pixel groups can be used at this time to realize phase focusing (PDAF).
  • the color pixel group and the panchromatic pixel group in the pixel array can also be used as the target pixel group.
  • phase information of color pixel groups can be used to achieve phase focusing, for example, red pixel groups, green pixel groups, and blue pixel groups are used.
  • phase focusing may be realized using the phase information of at least one of the color pixel groups. It is also possible to use some pixels in a part of the color pixel groups, for example, the phase information of some pixels in a red pixel group, which is not limited in this application. Since only the phase information of the color pixel groups is used for phase focusing in this case, the data volume of the output phase information is reduced, thereby improving the efficiency of phase focusing.
  • the target microlens is determined from the microlenses corresponding to the target pixel group, and at least two pixels corresponding to the target microlens are used as target pixels.
  • the target microlenses in the first direction corresponding to the target pixel group may be determined; the target microlenses in the first direction correspond to at least two adjacent target pixels arranged along the first direction, and those pixels are used as target pixels. It is also possible to determine the target microlenses in the second direction corresponding to the target pixel group; the target microlenses in the second direction correspond to at least two adjacent target pixels arranged along the second direction, where the second direction and the first direction are perpendicular to each other, and those pixels are used as target pixels.
  • it is also possible to determine both the target microlenses in the first direction and the target microlenses in the second direction corresponding to the target pixel group; the target microlenses in the first direction correspond to at least two adjacent target pixels arranged along the first direction, and the target microlenses in the second direction correspond to at least two adjacent target pixels arranged along the second direction. At least two pixels corresponding to the target microlenses in the first direction and at least two pixels corresponding to the target microlenses in the second direction are used as target pixels.
  • for each target pixel group, the phase information of each target pixel in the target pixel group is acquired.
  • each pixel corresponds to a photosensitive element.
  • a 12 ⁇ 12 pixel array may include 2 red pixel groups, 4 green pixel groups, 2 blue pixel groups, and 8 panchromatic pixel groups.
  • a represents green
  • b represents red
  • c represents blue
  • w represents panchromatic (full color).
  • the phase information of the red pixel group is calculated for the red pixel group 244.
  • the red pixel group includes nine red pixels arranged in a 3 × 3 array, numbered sequentially as red pixel 1, red pixel 2, red pixel 3, red pixel 4, red pixel 5, red pixel 6, red pixel 7, red pixel 8, and red pixel 9.
  • each pixel corresponds to a photosensitive element. That is, the phase information of red pixel 1 is L1 and that of red pixel 2 is R1; the phase information of red pixel 8 is L2 and that of red pixel 9 is R2; the phase information of red pixel 3 is U1 and that of red pixel 6 is D1; the phase information of red pixel 4 is U2 and that of red pixel 7 is D2.
  • the pixel 5 at the very center of the red pixel group corresponds to a microlens by itself; therefore, no phase information is obtained from it.
  • the target microlenses in the first direction corresponding to the target pixel group are determined, at least two pixels corresponding to the target microlenses in the first direction are used as target pixels. Assuming that the first direction is the horizontal direction, L1, R1, L2, and R2 are sequentially output to obtain the phase information of the red pixel group.
  • the above-mentioned processing is performed on the other 1 red pixel group, 4 green pixel groups and 2 blue pixel groups included in the pixel array in order to obtain the phase information L3, R3, L4, R4, L5, R5, L6, R6, L7, R7, L8, R8, L9, R9, L10, R10, L11, R11, L12, R12, L13, R13, L14, R14, L15, R15, L16, R16.
  • the size of the full-size phase array is equivalent to the size of 4 ⁇ 8 pixels arranged in the array.
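  • The microlens layout implied above for one 3 × 3 target pixel group can be sketched in Python: pixels 1–2 and 8–9 share horizontal (first-direction) microlenses, pixels 3–6 and 4–7 share vertical (second-direction) microlenses, and the center pixel 5 contributes no phase information. The pairing convention and the per-pixel readings below are illustrative assumptions, not the patent's implementation:

```python
# Pixel index pairs sharing one microlens, 1-indexed as in the text above.
HORIZONTAL_PAIRS = [(1, 2), (8, 9)]   # left/right phase pairs -> L1,R1,L2,R2
VERTICAL_PAIRS = [(3, 6), (4, 7)]     # up/down phase pairs   -> U1,D1,U2,D2

def group_phase_output(readings, direction="horizontal"):
    """Return the phase sequence for one group, e.g. [L1, R1, L2, R2].

    readings: dict mapping pixel number (1..9) to its phase sample; the
    center pixel 5 is never read because it sits under its own microlens.
    """
    pairs = HORIZONTAL_PAIRS if direction == "horizontal" else VERTICAL_PAIRS
    out = []
    for a, b in pairs:
        out.extend([readings[a], readings[b]])
    return out

readings = {i: float(i) for i in range(1, 10)}  # dummy per-pixel phase values
print(group_phase_output(readings))             # [1.0, 2.0, 8.0, 9.0]
```

  • Repeating this over every color pixel group and concatenating the outputs yields the L1, R1, …, L16, R16 sequence described above.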
  • the pixel size refers to the area size of a pixel, and the area size is related to the length and width of the pixel.
  • a pixel is the smallest photosensitive unit on a photosensitive device (CCD or CMOS) of a digital camera.
  • CCD is the abbreviation of charge-coupled device, and CMOS is the abbreviation of complementary metal-oxide-semiconductor.
  • a pixel has no fixed size, and the size of a pixel is related to the size and resolution of the display screen.
  • the size of the 4 ⁇ 8 pixels arranged in the array is: 4 ⁇ 0.0778 mm in length and 8 ⁇ 0.0778 mm in width.
  • that is, the size of the full-size phase array is 4 × 0.0778 mm in length and 8 × 0.0778 mm in width.
  • the pixels may not be rectangles with equal length and width, and may also have other irregular shapes, which is not limited in this application.
  • the target microlenses in the second direction corresponding to the target pixel group are determined, and at least two pixels corresponding to the target microlenses in the second direction are used as target pixels. Assuming that the second direction is the vertical direction, output U1, D1, U2, and D2 in sequence to obtain the phase information of the red pixel group.
  • the above processing is performed sequentially on the other 1 red pixel group, 4 green pixel groups and 2 blue pixel groups included in the pixel array to obtain the phase information U3, D3, U4, D4, U5, D5, U6, D6, U7, D7, U8, D8, U9, D9, U10, D10, U11, D11, U12, D12, U13, D13, U14, D14, U15, D15, U16, D16.
  • the phase information of all target pixels is output as a full-size phase array corresponding to the pixel array. That is, U1, D1, U2, D2, U3, D3, U4, D4, U5, D5, U6, D6, U7, D7, U8, D8, U9, D9, U10, D10, U11, D11, U12, D12, U13, D13, U14, D14, U15, D15, U16, and D16 are arranged in sequence to form the full-size phase array, and the size of the full-size phase array is equivalent to the size of 4 × 8 pixels arranged in the array.
  • the target microlenses in the first direction and the target microlenses in the second direction corresponding to the target pixel group are determined, and at least two pixels corresponding to the target microlenses in the first direction and at least two pixels corresponding to the target microlenses in the second direction are used as target pixels.
  • the first direction is the horizontal direction and the second direction is the vertical direction
  • L1, R1, L2, R2, U1, D1, U2, and D2 in sequence to obtain the phase information of the red pixel group.
  • the above processing is performed sequentially on the other 1 red pixel group, 4 green pixel groups and 2 blue pixel groups included in the pixel array to obtain the phase information of the pixel array.
  • output the phase information of all target pixels as a full-scale phase array corresponding to the pixel array.
  • the size of the full-size phase array is equivalent to the size of 8 ⁇ 8 pixels arranged in the array.
  • the phase array can be input into the image signal processor (ISP), and the phase difference of the pixel array is calculated from the phase array by the ISP. The defocus distance is then calculated from the phase difference, and the DAC code value corresponding to the defocus distance is computed. Finally, the code value is converted into a driving current by the driver IC of the voice coil motor (VCM), and the motor drives the lens to the in-focus position.
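  • The ISP-side pipeline above (phase difference → defocus distance → DAC code) can be modeled with a hedged Python sketch. The patent gives no formulas, so the linear conversions and all constants below are made-up calibration values used only to show the data flow:

```python
# Hypothetical sketch of the focusing pipeline described above. Real sensors
# derive slope_um_per_pd and um_per_code from per-module calibration.

def phase_difference(left, right):
    """Average disparity between matched left and right phase samples."""
    return sum(l - r for l, r in zip(left, right)) / len(left)

def defocus_to_dac(pd, slope_um_per_pd=12.0, um_per_code=0.5,
                   current_code=512):
    """Convert a phase difference to a target VCM DAC code (assumed linear)."""
    defocus_um = pd * slope_um_per_pd              # defocus distance estimate
    return current_code + round(defocus_um / um_per_code)

pd = phase_difference([1.2, 1.1], [1.0, 0.9])
print(defocus_to_dac(pd))   # 517
```

  • The driver IC then converts the DAC code into a coil current, moving the lens toward the position where the left and right phase samples coincide and the phase difference approaches zero.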
  • the phase array corresponding to the pixel array is output according to the full-size output mode.
  • the color pixel groups in the pixel array are used as target pixel groups, and for each target pixel group, the phase information of each pixel in the target pixel group is acquired.
  • the phase information of the target pixel can be obtained from at least one microlens of the microlenses in the first direction and the microlenses in the second direction, therefore, various kinds of phase information can be obtained.
  • a full-scale phase array corresponding to the pixel array is generated. Since only the phase information of the color pixel group is used for phase focusing at this time, the data volume of the output phase information is reduced, thereby improving the efficiency of phase focusing.
  • the phase information output mode is the first size output mode
  • the color pixel group and the panchromatic pixel group in the pixel array are used as the target pixel group, including:
  • At least one of a color pixel group and a panchromatic pixel group in the pixel array is used as a target pixel group; the target pixel group corresponds to a pixel group;
  • according to the phase information of the target pixel, generating a phase array corresponding to the pixel array includes:
  • a first-size phase array corresponding to the pixel array is generated according to multiple sets of intermediate combined phase information.
  • This embodiment is a specific implementation of outputting the phase array corresponding to the pixel array according to the first-size output mode when the light intensity of the current shooting scene is greater than the second preset threshold and less than or equal to the first preset threshold.
  • the second preset threshold may be 500 lux, which is not limited in this application; that is, the scene is in an environment where the light intensity is greater than 500 lux and less than or equal to 2000 lux.
  • as shown in FIG. 15, firstly, at least one of a color pixel group and a panchromatic pixel group is determined from the pixel array as the target pixel group for calculating phase information.
  • the panchromatic pixels are not easily saturated in a scene with slightly weak light, so the phase information of the panchromatic pixel group can be used at this time to realize phase focusing (PDAF).
  • the color pixels can also obtain accurate phase information in the scene with weak light, so at this time, the phase information of the color pixel group can also be used to realize phase focusing (PDAF).
  • when outputting the phase array corresponding to the pixel array according to the first-size output mode, the color pixel groups may be selected as the target pixel groups, the panchromatic pixel groups may be selected as the target pixel groups, or both the color pixel groups and the panchromatic pixel groups may be selected as the target pixel groups, which is not limited in this application.
  • phase focusing can be realized using the phase information of a part of the color pixel groups in the pixel array, or using part of the color pixels in those color pixel groups, which is not limited in this application.
  • similarly, the phase information of some panchromatic pixel groups in the pixel array can be used to achieve phase focusing, or part of the panchromatic pixels in those panchromatic pixel groups can be used to achieve phase focusing, which is not limited in this application.
  • likewise, the phase information of a part of the panchromatic pixel groups and a part of the color pixel groups in the pixel array can be used to achieve phase focusing, or part of the panchromatic pixels in those panchromatic pixel groups and part of the color pixels in those color pixel groups can be used, which is not limited in this application.
  • the phase information of some pixel groups can be used for phase focusing, or only the phase information of some pixels in some pixel groups can be used, so the data volume of the output phase information is reduced, thereby improving the efficiency of phase focusing.
  • determine the target microlens from the microlenses corresponding to the target pixel group, and use at least two pixels corresponding to the target microlens as target pixels.
  • the target microlenses in the first direction corresponding to the target pixel group may be determined; the target microlenses in the first direction correspond to at least two adjacent target pixels arranged along the first direction, and those pixels are used as target pixels. It is also possible to determine the target microlenses in the second direction corresponding to the target pixel group; the target microlenses in the second direction correspond to at least two adjacent target pixels arranged along the second direction, where the second direction and the first direction are perpendicular to each other, and those pixels are used as target pixels.
  • it is also possible to determine both the target microlenses in the first direction and the target microlenses in the second direction corresponding to the target pixel group; the target microlenses in the first direction correspond to at least two adjacent target pixels arranged along the first direction, and the target microlenses in the second direction correspond to at least two adjacent target pixels arranged along the second direction. At least two pixels corresponding to the target microlenses in the first direction and at least two pixels corresponding to the target microlenses in the second direction are used as target pixels.
  • for each target pixel group, the phase information of each target pixel in the target pixel group is acquired.
  • each pixel corresponds to a photosensitive element.
  • for each panchromatic pixel group, the phase information of each target pixel in the panchromatic pixel group is obtained.
  • the phase information of the target pixel is combined to generate multiple sets of intermediate combined phase information; finally, according to the multiple sets of intermediate combined phase information, a first-size phase array corresponding to the pixel array is generated.
  • a pixel array may include 2 red pixel groups, 4 green pixel groups, 2 blue pixel groups and 8 panchromatic pixel groups. Assuming that all panchromatic pixel groups in the pixel array are used as target pixel groups, then for the 8 panchromatic pixel groups included in the pixel array, the phase information of each pixel group is sequentially calculated. For example, the phase information of the panchromatic pixel group is calculated for the panchromatic pixel group.
  • the panchromatic pixel group includes 9 panchromatic pixels arranged in a 3 × 3 array, sequentially numbered as panchromatic pixel 1, panchromatic pixel 2, panchromatic pixel 3, panchromatic pixel 4, panchromatic pixel 5, panchromatic pixel 6, panchromatic pixel 7, panchromatic pixel 8, and panchromatic pixel 9. Wherein, each pixel corresponds to a photosensitive element.
  • the phase information of panchromatic pixel 1 is L1 and that of panchromatic pixel 2 is R1; the phase information of panchromatic pixel 8 is L2 and that of panchromatic pixel 9 is R2; the phase information of panchromatic pixel 3 is U1 and that of panchromatic pixel 6 is D1; the phase information of panchromatic pixel 4 is U2 and that of panchromatic pixel 7 is D2.
  • the pixel 5 at the center of the panchromatic pixel group corresponds to a microlens by itself; therefore, no phase information is obtained from it.
  • the target microlenses in the first direction corresponding to the target pixel group are determined, at least two pixels corresponding to the target microlenses in the first direction are used as target pixels.
  • the first direction is the horizontal direction
  • L1, R1, L2, and R2 are sequentially output, and then the phase information of the target pixel is combined to generate multiple sets of intermediate combined phase information.
  • L1 and L2 are combined to generate first intermediate combined phase information L; R1 and R2 are combined to generate first intermediate combined phase information R.
  • the above-mentioned processing is performed sequentially on the other panchromatic pixel groups included in the pixel array to obtain the phase information L3, R3, L4, R4, L5, R5, L6, R6, L7, R7, L8, R8 of the pixel array , L9, R9, L10, R10, L11, R11, L12, R12, L13, R13, L14, R14, L15, R15, L16, R16.
  • L3 and L4 are combined to generate first intermediate combined phase information L;
  • R3 and R4 are combined to generate first intermediate combined phase information R.
  • a first-size phase array corresponding to the pixel array is generated according to multiple sets of intermediate combined phase information.
  • eight sets of first intermediate combined phase information L and first intermediate combined phase information R are sequentially arranged to generate a phase array of the first size.
  • the size of the phase array of the first size is equivalent to the size of 4 ⁇ 4 pixels arranged in the array.
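  • The first-size binning step described above can be sketched in Python. The patent says only that same-position phase samples are "combined"; averaging is an assumption used here for illustration, and summing would work equally well as a combining rule:

```python
# Sketch of first-size binning: the two horizontal microlens pairs of a group
# yield (L1, R1, L2, R2); same-position samples are merged into one L and one
# R value, halving the phase array and raising the SNR. Averaging is assumed.

def bin_group(l1, r1, l2, r2):
    """Return the (L, R) intermediate combined phase information for a group."""
    return (l1 + l2) / 2, (r1 + r2) / 2

def first_size_array(groups):
    """groups: list of (L1, R1, L2, R2) tuples, one per target pixel group."""
    return [bin_group(*g) for g in groups]

print(first_size_array([(1.0, 2.0, 3.0, 4.0)]))   # [(2.0, 3.0)]
```

  • Applying this to the 8 panchromatic target pixel groups yields the 8 sets of first intermediate combined phase information L and R that are arranged into the 4 × 4-pixel-sized first-size phase array described above.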
  • the conversion processing may be processing such as correcting the 8 sets of first intermediate combining phase information L and first intermediate combining phase information R, which is not limited in this application.
  • the phase array corresponding to the size of 4 ⁇ 4 pixels can be combined into a phase array with the size of 4 ⁇ 2 pixels.
  • the present application does not limit the specific size of the combined phase array.
  • the pixel size refers to the area size of a pixel, which is related to the length and width of the pixel.
  • a pixel is the smallest photosensitive unit on a photosensitive device (CCD or CMOS) of a digital camera.
  • the size of the 4 ⁇ 4 pixels arranged in the array is: 4 ⁇ 0.0778 mm in length and 4 ⁇ 0.0778 mm in width.
  • that is, the size of the first-size phase array is 4 × 0.0778 mm in length and 4 × 0.0778 mm in width.
  • the pixels may not be rectangles with equal length and width, and may also have other irregular shapes, which is not limited in this application.
  • the target microlenses in the second direction corresponding to the target pixel group are determined, and at least two pixels corresponding to the target microlenses in the second direction are used as target pixels.
  • the second direction is the vertical direction
  • output U1, D1, U2, and D2 in sequence, and then combine the phase information of the target pixel to generate multiple sets of intermediate combined phase information.
  • U1 and U2 are combined to generate first intermediate combined phase information U; D1 and D2 are combined to generate first intermediate combined phase information D.
  • the above-mentioned processing is performed sequentially on the other panchromatic pixel groups included in the pixel array to obtain the phase information U3, D3, U4, D4, U5, D5, U6, D6, U7, D7, U8, D8 of the pixel array , U9, D9, U10, D10, U11, D11, U12, D12, U13, D13, U14, D14, U15, D15, U16, D16.
  • U3 and U4 are combined to generate first intermediate combined phase information U;
  • D3 and D4 are combined to generate first intermediate combined phase information D.
  • a first-size phase array corresponding to the pixel array is generated according to multiple sets of intermediate combined phase information.
  • eight sets of first intermediate combined phase information U and first intermediate combined phase information D are sequentially arranged to generate a phase array of the first size.
  • the size of the phase array of the first size is equivalent to the size of 4 ⁇ 4 pixels arranged in the array.
  • the target microlenses in the first direction and the target microlenses in the second direction corresponding to the target pixel group are determined, and at least two pixels corresponding to the target microlenses in the first direction and at least two pixels corresponding to the target microlenses in the second direction are used as target pixels.
  • the first direction is the horizontal direction and the second direction is the vertical direction
  • output L1, R1, L2, R2, U1, D1, U2, and D2 in sequence, and then combine the phase information of the target pixels to generate multiple sets of intermediate combined phase information.
  • L1 and L2 are combined to generate first intermediate combined phase information L;
  • R1 and R2 are combined to generate first intermediate combined phase information R.
  • the above processing is performed sequentially on other panchromatic pixel groups included in the pixel array to obtain the phase information of the pixel array.
  • the phase information of the target pixel is combined to generate multiple sets of intermediate combined phase information, and the multiple sets of intermediate combined phase information are used as a first-size phase array corresponding to the pixel array.
  • the size of the first-size phase array is equivalent to the size of 8 × 8 pixels arranged in the array.
  • phase information of each target pixel in each color pixel group is acquired for each color pixel group.
  • the phase information of the target pixel is combined to generate multiple sets of intermediate combined phase information; finally, according to the multiple sets of intermediate combined phase information, a first-size phase array corresponding to the pixel array is generated.
  • the specific calculation process is the same as the process of calculating the phase array of the first size when all the panchromatic pixel groups in the pixel array are used as target pixel groups, and will not be repeated here.
  • when the panchromatic pixel groups and color pixel groups in the pixel array are used as target pixel groups, the phase information of each target pixel in each panchromatic pixel group and each color pixel group is obtained.
  • the phase information of the target pixel is combined to generate multiple sets of intermediate combined phase information; finally, according to the multiple sets of intermediate combined phase information, a first-size phase array corresponding to the pixel array is generated.
  • the specific calculation process is the same as the process of calculating the phase array of the first size when all the panchromatic pixel groups in the pixel array are used as the target pixel group, and will not be repeated here.
  • the phase array can be input into the ISP, and the phase difference of the pixel array is calculated from the phase array by the ISP. The defocus distance is then calculated from the phase difference, and the DAC code value corresponding to the defocus distance is computed. Finally, the code value is converted into a driving current by the driver IC of the voice coil motor (VCM), and the motor drives the lens to the in-focus position.
  • when the light intensity of the current shooting scene is greater than the second preset threshold and less than or equal to the first preset threshold, because the light is slightly weaker, the phase information collected by the color pixel groups or panchromatic pixel groups is not very accurate, and some color pixel groups or panchromatic pixel groups may not collect phase information at all. Therefore, at least one of the color pixel groups and panchromatic pixel groups in the pixel array is used as the target pixel group, and for each target pixel group, the phase information of each pixel in the target pixel group is acquired. The phase information of the target pixel can be obtained from at least one of the microlenses in the first direction and the microlenses in the second direction; therefore, various kinds of phase information can be obtained.
  • the phase information of the target pixel is combined to generate multiple sets of intermediate combined phase information.
  • a first-size phase array corresponding to the pixel array is generated according to multiple sets of intermediate combined phase information.
  • the phase information of the target pixel is combined to a certain extent by adopting the first size output mode, so as to improve the accuracy of the output phase information and improve the signal-to-noise ratio of the phase information.
  • focusing accuracy can be improved by performing phase focusing based on the phase array of the first size corresponding to the pixel array.
  • the phase information of the target pixel is combined to generate multiple sets of intermediate combined phase information, including:
  • the target pixels at the same position are those at the same orientation in the first direction within their corresponding target microlenses;
  • phase information of the target pixel at the same position is combined to generate multiple sets of first combined phase information.
  • the target microlenses in the first direction corresponding to the target pixel group are determined, and at least two pixels corresponding to the target microlenses in the first direction are used as target pixels. Assuming that the first direction is the horizontal direction, L1, R1, L2, and R2 are sequentially output, and then the phase information of the target pixel is combined to generate multiple sets of intermediate combined phase information.
  • the target pixels at the same position are at the same orientation in the first direction in the corresponding target microlens. For example, from the pixel 1, pixel 2, pixel 8, and pixel 9 corresponding to the target microlens 2211a (filled in gray in the figure) in the first direction, the target pixel at the same position is determined.
  • L1 and L2 are combined to generate first intermediate combined phase information L; R1 and R2 are combined to generate first intermediate combined phase information R.
  • Multiple sets of first combined phase information are generated based on multiple sets of first intermediate combined phase information L and first intermediate combined phase information R.
  • At least two target microlenses in the first direction in the target pixel group are obtained, and target pixels at the same position are determined from pixels corresponding to the at least two target microlenses in the first direction.
  • the phase information of the target pixel at the same position is combined to generate multiple sets of first combined phase information.
  • the first size output mode is adopted to combine the phase information of the pixels to a certain extent, so as to improve the accuracy of the output phase information and improve the signal-to-noise ratio of the phase information.
  • focusing accuracy can be improved by performing phase focusing based on the phase array of the first size corresponding to the pixel array.
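  • The same-position selection and combining steps above can be sketched in Python. Grouping pixels by their orientation under each microlens is modeled with a transpose; averaging as the combining rule is an assumption, since the patent only says the phase information is "combined":

```python
# Sketch of same-position combining for first-direction target microlenses.
# microlenses: list of pixel (or phase) tuples, one tuple per target
# microlens, e.g. [(1, 2), (8, 9)] for the pairs described earlier.

def same_position_pixels(microlenses):
    """Group entries by position under the microlens: all left entries
    together, all right entries together."""
    return list(zip(*microlenses))

def combine_same_position(phases):
    """phases: [(left, right), ...], one tuple per target microlens.
    Returns the combined (L, R) phase information (averaging assumed)."""
    groups = same_position_pixels(phases)
    return tuple(sum(g) / len(g) for g in groups)

print(same_position_pixels([(1, 2), (8, 9)]))               # [(1, 8), (2, 9)]
print(combine_same_position([(10.0, 12.0), (14.0, 16.0)]))  # (12.0, 14.0)
```

  • The second-direction case is identical with (up, down) tuples in place of (left, right).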
  • alternatively, the first direction may be the vertical direction; the vertical direction is named the second direction here, and the target pixels are at least two pixels corresponding to the target microlenses in the second direction.
  • the phase information of the target pixels within the target pixel group is combined to generate multiple sets of intermediate combined phase information, including:
  • the target pixels at the same position are those at the same orientation in the second direction within their corresponding target microlenses;
  • the target microlenses in the second direction corresponding to the target pixel group are determined, and at least two pixels corresponding to the target microlenses in the second direction are used as target pixels. Assuming that the second direction is the vertical direction, output U1, D1, U2, and D2 in sequence, and then combine the phase information of the target pixel to generate multiple sets of intermediate combined phase information.
  • At least two target microlenses in the second direction in the target pixel group are obtained, and the target pixel at the same position is determined from the pixels corresponding to the at least two target microlenses in the second direction.
  • the target pixels at the same position are at the same orientation in the second direction in the corresponding target microlens. For example, from the pixel 3, pixel 6, pixel 4, and pixel 7 corresponding to the target microlens 2211a in the second direction (filled with oblique lines in the figure), the target pixel at the same position is determined.
  • At least two target microlenses in the second direction in the target pixel group are acquired, and target pixels at the same position are determined from pixels corresponding to the at least two target microlenses in the second direction.
  • the first size output mode is adopted to combine the phase information of the pixels to a certain extent, so as to improve the accuracy of the output phase information and improve the signal-to-noise ratio of the phase information.
  • focusing accuracy can be improved by performing phase focusing based on the phase array of the first size corresponding to the pixel array.
  • the phase information of the target pixels in the target pixel group is combined to generate multiple sets of intermediate combined phase information, including:
  • the target pixels at the same position are in the same orientation in the first direction in the corresponding target microlens;
  • multiple sets of intermediate combined phase information are generated.
  • the target microlenses in the first direction corresponding to the target pixel group are determined, and at least two pixels corresponding to the target microlenses in the first direction are used as target pixels. Assuming that the first direction is the horizontal direction, L1, R1, L2, and R2 are sequentially output, and then the phase information of the target pixel is combined to generate multiple sets of intermediate combined phase information.
  • the target pixels at the same position are in the same orientation in the first direction in the corresponding target microlens. For example, from the pixel 1, pixel 2, pixel 8, and pixel 9 corresponding to the target microlens 2211a (filled in gray in the figure) in the first direction, the target pixel at the same position is determined.
  • L1 and L2 are combined to generate first intermediate combined phase information L; R1 and R2 are combined to generate first intermediate combined phase information R.
  • Multiple sets of first combined phase information are generated based on multiple sets of first intermediate combined phase information L and first intermediate combined phase information R.
  • the target microlenses in the second direction corresponding to the target pixel group are determined, and at least two pixels corresponding to the target microlenses in the second direction are used as target pixels. Assuming that the second direction is the vertical direction, output U1, D1, U2, and D2 in sequence, and then combine the phase information of the target pixel to generate multiple sets of intermediate combined phase information.
  • the target pixels at the same position are in the same orientation in the second direction in the corresponding target microlens. For example, from the pixel 3, pixel 6, pixel 4, and pixel 7 corresponding to the target microlens 2211a in the second direction (filled with oblique lines in the figure), the target pixel at the same position is determined.
  • multiple sets of intermediate combined phase information are generated based on multiple sets of first combined phase information and multiple sets of second combined phase information.
  • At least two target microlenses in the first direction and at least two target microlenses in the second direction in the target pixel group are obtained, and the target pixels at the same position are determined from the pixels corresponding to the target microlenses.
  • the phase information of the target pixel at the same position is combined to generate multiple sets of first combined phase information and second combined phase information.
  • multiple sets of intermediate combined phase information are generated based on multiple sets of first combined phase information and multiple sets of second combined phase information.
  • the first size output mode is adopted to combine the phase information of the pixels to a certain extent, so as to improve the accuracy of the output phase information and improve the signal-to-noise ratio of the phase information.
  • focusing accuracy can be improved by performing phase focusing based on the phase array of the first size corresponding to the pixel array.
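When target microlenses exist in both the first (horizontal) and second (vertical) directions, the same-position readings are merged per orientation, yielding the first combined phase information (L, R) and the second combined phase information (U, D) that together form the intermediate combined phase information. The sketch below illustrates this; the dictionary layout and averaging are assumptions for illustration, not the patented implementation.

```python
# Hypothetical sketch: merge same-position phase readings per orientation
# for a target pixel group with microlenses in both directions.

def combine_orientations(readings):
    """readings maps an orientation label to its same-position phase values,
    e.g. {"L": [L1, L2], "R": [R1, R2], "U": [U1, U2], "D": [D1, D2]}.
    Returns one combined value per orientation."""
    return {k: sum(v) / len(v) for k, v in readings.items()}

merged = combine_orientations({
    "L": [10.0, 12.0], "R": [11.0, 13.0],
    "U": [9.0, 11.0],  "D": [10.0, 12.0],
})
```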
  • the target pixel group includes a color pixel group and a panchromatic pixel group
  • a first-size phase array of the pixel array in the first direction is generated according to the target phase information, including:
  • a first-sized phase array of the pixel array in the first direction is generated.
  • the determined target pixel group includes a color pixel group and a panchromatic pixel group
  • weights between different pixel groups may be considered.
  • the first phase weight corresponding to the color pixel group and the second phase weight corresponding to the panchromatic pixel group may be determined according to the light intensity of the current shooting scene.
  • the color pixel groups correspond to different first phase weights under different light intensities
  • the panchromatic pixel groups correspond to different second phase weights under different light intensities.
  • the first phase weight corresponding to the color pixel group is 40%, of which the phase weight of the green pixel group is 20%, the phase weight of the red pixel group is 10%, and the phase weight of the blue pixel group is 10%.
  • the second phase weight corresponding to the panchromatic pixel group is 60%, which is not limited in this application.
  • a phase array of the first size of the pixel array in the first direction can be generated. For example, for this pixel array, based on the target phase information of the first red pixel group and its phase weight of 10%, the target phase information of the second red pixel group and its phase weight of 10%, the target phase of the first blue pixel group Information and phase weight 10%, target phase information and phase weight 10% for the second blue pixel group, and target phase information and phase weight 20% for each green pixel group, and target phase information for each panchromatic pixel group and the phase weight of 60%, and calculate the phase information of the pixel array in the first direction by summing together, that is, the phase array of the first size is obtained.
  • the target phase information of the color pixel group with its first phase weight, and the target phase information of the panchromatic pixel group with its second phase weight, can be used to generate the phase array of the first size of the pixel array.
  • the phase array of the first size of the pixel array is jointly generated, which can improve the comprehensiveness of the phase information.
  • the phase weights of the target phase information of the color pixel group and the panchromatic pixel group are different under different light intensities. In this way, the accuracy of the phase information can be improved by adjusting the weights under different light intensities.
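The weighted generation above can be sketched as a simple weighted sum. The weights follow the example values in the description (green 20%, red 10%, blue 10% within the color weight; panchromatic 60%); the function name and the flat list layout are assumptions for illustration.

```python
# Hypothetical sketch of weighting target phase information by pixel-group
# type to form one entry of the first-size phase array.

def weighted_phase(groups):
    """groups: list of (target_phase_value, weight) pairs contributing
    to one entry of the phase array."""
    return sum(phase * weight for phase, weight in groups)

entry = weighted_phase([
    (1.0, 0.10), (1.0, 0.10),                            # two red pixel groups
    (1.0, 0.10), (1.0, 0.10),                            # two blue pixel groups
    (1.0, 0.20), (1.0, 0.20), (1.0, 0.20), (1.0, 0.20),  # green pixel groups
    (1.0, 0.60),                                         # panchromatic groups
])
```

Under different light intensities the weight pairs would be swapped out, which matches the description's note that the weights are adjusted per scene.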
  • the pixel array can be an RGBW pixel array, including a plurality of minimum repeating units 241, the minimum repeating unit 241 includes a plurality of pixel groups 242, and the plurality of pixel groups 242 includes panchromatic pixel groups 243 and color pixel groups 244.
  • Each panchromatic pixel group 243 includes 9 panchromatic pixels 2431, and each color pixel group 244 includes 9 color pixels 2441.
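The RGBW layout above (minimal repeating units containing panchromatic and color pixel groups, each group a 3 × 3 block of pixels) can be modeled minimally as follows. The helper name and list-of-lists representation are assumptions for illustration.

```python
# Hypothetical sketch of the pixel-group structure of the RGBW array:
# every group, whether panchromatic ('W') or color ('R', 'G', 'B'),
# is a 3 x 3 block, i.e. 9 pixels.

GROUP_SIDE = 3  # each pixel group is a 3 x 3 array

def make_group(kind):
    """kind is 'W' for a panchromatic group or 'R'/'G'/'B' for a color group."""
    return [[kind] * GROUP_SIDE for _ in range(GROUP_SIDE)]

group = make_group("W")
pixel_count = sum(len(row) for row in group)
```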
  • when the pixel array is an RGBW pixel array, the phase information output mode also includes a second-size output mode; wherein, the size of the phase array in the first-size output mode is larger than the size of the phase array in the second-size output mode;
  • determining the phase information output mode adapted to the light intensity of the current shooting scene includes:
  • the phase information output mode corresponding to the light intensity range is the second size output mode. Then, if it is determined that the light intensity of the current shooting scene is less than the second preset threshold, the light intensity of the current shooting scene falls within the light intensity range. That is, it is determined that the phase information output mode adapted to the light intensity of the current shooting scene is the second size output mode.
  • the second preset threshold may be 500 lux, which is not limited in this application. That is, the scene is at dusk or in an environment where the light intensity is less than 500 lux.
  • outputting the phase array in the second size output mode is to combine and output the original phase information of the pixel array to generate the phase array of the pixel array.
  • the size of the pixel array is larger than the size of the phase array of the pixel array. For example, if the size of the pixel array is 12 × 12, the size of the phase array of each target pixel group in the pixel array is 4 × 2; the size of the phase array is not limited in this application.
  • the size of the phase array in the first size output mode is greater than or equal to the size of the phase array in the second size output mode. If the light intensity of the current shooting scene is less than the second preset threshold, it is determined that the phase information output mode adapted to the light intensity of the current shooting scene is the second size output mode, and at this time, the second size output mode is used to output the phase array corresponding to the pixel array. That is, when the light intensity of the current shooting scene is weaker, the signal-to-noise ratio of the phase information is improved by reducing the size of the phase array.
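The mode selection described above can be sketched as a simple threshold check. The 500 lux value follows the example in the description; the function and mode names are assumptions for illustration.

```python
# Hypothetical sketch of selecting the phase information output mode from
# the light intensity of the current shooting scene.

SECOND_PRESET_THRESHOLD = 500  # lux, example value from the description

def select_output_mode(light_intensity):
    """Return the output mode adapted to the scene's light intensity (lux)."""
    if light_intensity < SECOND_PRESET_THRESHOLD:
        # Darker scene: smaller phase array, merged more heavily for SNR.
        return "second-size"
    # Brighter scene: larger phase array for finer phase detection.
    return "first-size"

mode = select_output_mode(300)
```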
  • determining the target pixel group from the color pixel groups and the panchromatic pixel groups in the pixel array includes:
  • generating a phase array corresponding to the pixel array according to the phase information of the target pixel includes:
  • a phase array of a second size corresponding to the pixel array is generated according to multiple sets of target combined phase information.
  • This embodiment is a specific implementation operation of outputting the phase array corresponding to the pixel array according to the second size output mode when the light intensity of the current shooting scene is less than the second preset threshold. Wherein, at this time, the scene is at dusk or in an environment with light intensity less than 500 lux. Firstly, two color pixel groups adjacent along the first diagonal direction in the pixel array are used as target pixel groups, and two panchromatic pixel groups adjacent along the second diagonal direction are used as target pixel groups.
  • the panchromatic pixel can capture more light information in an extremely dark scene, so at this time two color pixel groups adjacent along the first diagonal direction in the pixel array can be used as target pixel groups, and two panchromatic pixel groups adjacent along the second diagonal direction can be used as target pixel groups. The phase information of the target pixels of the two panchromatic pixel groups in the target pixel group is combined, and the phase information of the target pixels in the two color pixel groups in the target pixel group is combined, to generate multiple sets of target combined phase information.
  • a phase array of a second size corresponding to the pixel array is generated according to multiple sets of target combined phase information.
  • the process of combining the phase information of the target pixels of the two panchromatic pixel groups in the target pixel group can refer to the process of combining the phase information of the target pixels of the two panchromatic pixel groups in the next embodiment.
  • For combining the phase information of the target pixels of the two color pixel groups in the target pixel group, refer to the process of combining the phase information of the target pixels of the two panchromatic pixel groups in the next embodiment. Details are not repeated in this embodiment.
  • some pixel groups in the pixel array are used as the target pixel group, including:
  • generating a phase array corresponding to the pixel array according to the phase information of the target pixel includes:
  • a phase array of a second size corresponding to the pixel array is generated according to multiple sets of target combined phase information.
  • This embodiment is a specific implementation operation of outputting the phase array corresponding to the pixel array according to the second size output mode when the light intensity of the current shooting scene is less than the second preset threshold. Wherein, at this time, the scene is at dusk or in an environment with light intensity less than 500 lux. As shown in FIG. 16, firstly, the two color pixel groups adjacent along the first diagonal direction in the pixel array are used as target pixel groups, and the two panchromatic pixel groups adjacent along the second diagonal direction are used as target pixel groups.
  • a panchromatic pixel group may be selected as the target pixel group, or a color pixel group and a panchromatic pixel group may be selected as the target pixel group.
  • the phase information of some panchromatic pixel groups in the pixel array can be used to achieve phase focusing, or the phase information of some panchromatic pixels in those panchromatic pixel groups can be used to achieve phase focusing, which is not limited in this application.
  • the phase information of some color pixel groups and some panchromatic pixel groups in the pixel array can be used to achieve phase focusing, or phase focusing can be implemented by using some color pixels in the color pixel groups or some panchromatic pixels in the panchromatic pixel groups, which is not limited in this application.
  • the phase information of some pixel groups can be used for phase focusing, or only the phase information of some pixels in some pixel groups can be used for phase focusing, so the data volume of the output phase information is reduced, thereby improving the efficiency of phase focusing.
  • the target microlens is determined from the microlenses corresponding to the target pixel group, and at least two pixels corresponding to the target microlens are used as target pixels.
  • the target microlenses in the first direction corresponding to the target pixel group may be determined; the target microlenses in the first direction correspond to at least two adjacent target pixels arranged along the first direction. At least two pixels corresponding to the target microlenses in the first direction are used as target pixels. It is also possible to determine the target microlenses in the second direction corresponding to the target pixel group; the target microlenses in the second direction correspond to at least two adjacent target pixels arranged along the second direction; the second direction and the first direction are perpendicular to each other. At least two pixels corresponding to the target microlenses in the second direction are used as target pixels.
  • the target microlenses in the first direction and the target microlenses in the second direction corresponding to the target pixel group are determined; the target microlenses in the first direction correspond to at least two adjacent target pixels arranged along the first direction; the target microlenses in the second direction correspond to at least two adjacent target pixels arranged along the second direction. At least two pixels corresponding to the target microlenses in the first direction and at least two pixels corresponding to the target microlenses in the second direction are used as target pixels.
  • for each target pixel group, the phase information of each target pixel in the target pixel group is acquired.
  • each pixel corresponds to a photosensitive element.
  • the phase information of each target pixel in the panchromatic pixel groups in each target pixel group is obtained.
  • a pixel array may include 2 red pixel groups, 4 green pixel groups, 2 blue pixel groups and 8 panchromatic pixel groups. Assuming that the two panchromatic pixel groups adjacent along the second diagonal direction in the pixel array are both used as target pixel groups, then the 8 panchromatic pixel groups included in the pixel array form 4 pairs of panchromatic pixel groups, and the phase information of each target pixel group is calculated in sequence. For example, for a panchromatic pixel group in each target pixel group, the phase information of the panchromatic pixel group is calculated.
  • the first panchromatic pixel group includes 9 panchromatic pixels arranged in a 3 × 3 array, which are sequentially numbered as panchromatic pixel 1, panchromatic pixel 2, panchromatic pixel 3, panchromatic pixel 4, panchromatic pixel 5, panchromatic pixel 6, panchromatic pixel 7, panchromatic pixel 8, panchromatic pixel 9.
  • each pixel corresponds to a photosensitive element.
  • the phase information of panchromatic pixel 1 is L1, and the phase information of panchromatic pixel 2 is R1; the phase information of panchromatic pixel 8 is L2, and the phase information of panchromatic pixel 9 is R2; the phase information of panchromatic pixel 3 is U1, and the phase information of panchromatic pixel 6 is D1; the phase information of panchromatic pixel 4 is U2, and the phase information of panchromatic pixel 7 is D2.
  • the pixel 5 at the center of the panchromatic pixel group corresponds to a microlens by itself; therefore, no phase information is obtained from it.
  • the second panchromatic pixel group includes 9 panchromatic pixels arranged in a 3 × 3 array, which are sequentially numbered as panchromatic pixel 1, panchromatic pixel 2, panchromatic pixel 3, panchromatic pixel 4, panchromatic pixel 5, panchromatic pixel 6, panchromatic pixel 7, panchromatic pixel 8, panchromatic pixel 9.
  • each pixel corresponds to a photosensitive element.
  • the phase information of panchromatic pixel 1 is L1, and the phase information of panchromatic pixel 2 is R1; the phase information of panchromatic pixel 8 is L2, and the phase information of panchromatic pixel 9 is R2; the phase information of panchromatic pixel 3 is U1, and the phase information of panchromatic pixel 6 is D1; the phase information of panchromatic pixel 4 is U2, and the phase information of panchromatic pixel 7 is D2.
  • the pixel 5 at the center of the panchromatic pixel group corresponds to a microlens by itself; therefore, no phase information is obtained from it.
  • the target microlenses in the first direction corresponding to the target pixel group are determined, at least two pixels corresponding to the target microlenses in the first direction are used as target pixels. Assuming that the first direction is the horizontal direction, L1, R1, L2, R2 corresponding to the first panchromatic pixel group are sequentially output, and L1, R1, L2, R2 corresponding to the second panchromatic pixel group are sequentially output. Then, the phase information of the target pixels is combined to generate multiple sets of target combined phase information.
  • a phase array of a second size corresponding to the pixel array is generated according to multiple sets of target combined phase information.
  • four sets of first target combined phase information L and first target combined phase information R are sequentially arranged to generate a second size phase array.
  • the size of the phase array of the second size is equivalent to the size of 4 × 2 pixels arranged in the array.
  • the conversion processing may be processing such as correcting the four sets of first target combined phase information L and first target combined phase information R, which is not limited in this application.
  • the phase array corresponding to the size of 4 × 2 pixels can be combined into a phase array with the size of 2 × 2 pixels.
  • the present application does not limit the specific size of the combined phase array.
  • the pixel size refers to the area size of a pixel, and the area size is related to the length and width of the pixel.
  • a pixel is the smallest photosensitive unit on a photosensitive device (CCD or CMOS) of a digital camera.
  • the size of the 4 × 2 pixels arranged in the array is: 4 × 0.0778 mm in length and 2 × 0.0778 mm in width.
  • the size of the phase array of the second size is 4 × 0.0778 mm in length and 2 × 0.0778 mm in width.
  • the pixels may not be rectangles with equal length and width, and the pixels may also have other irregular shapes, which are not limited in this application.
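The size figures above follow directly from the pixel pitch. The small sketch below checks the arithmetic; the 0.0778 mm pitch is the example value quoted in the description, and the helper name is an assumption for illustration.

```python
# A small arithmetic check of the sizes quoted above: with a pixel pitch of
# 0.0778 mm, a phase array equivalent to 4 x 2 pixels measures
# 4 * 0.0778 mm in length by 2 * 0.0778 mm in width.

PIXEL_PITCH_MM = 0.0778  # example pixel size from the description

def phase_array_size_mm(cols, rows):
    """Physical size (length_mm, width_mm) of an array of cols x rows pixels."""
    return cols * PIXEL_PITCH_MM, rows * PIXEL_PITCH_MM

length_mm, width_mm = phase_array_size_mm(4, 2)
```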
  • the target microlenses in the second direction corresponding to the target pixel group are determined, and at least two pixels corresponding to the target microlenses in the second direction are used as target pixels. Assuming that the second direction is the vertical direction, U1, D1, U2, and D2 corresponding to the first panchromatic pixel group are sequentially output, and U1, D1, U2, and D2 corresponding to the second panchromatic pixel group are sequentially output. Then, the phase information of the target pixels is combined to generate multiple sets of target combined phase information.
  • a phase array of a second size corresponding to the pixel array is generated according to multiple sets of target combined phase information.
  • four sets of first target combined phase information U and first target combined phase information D are sequentially arranged to generate a second size phase array.
  • the size of the phase array of the second size is equivalent to the size of 4 × 2 pixels arranged in the array.
  • the target microlenses in the first direction and the target microlenses in the second direction corresponding to the target pixel group are determined, and at least two pixels corresponding to the target microlenses in the first direction and at least two pixels corresponding to the target microlenses in the second direction are used as target pixels.
  • the first direction is the horizontal direction and the second direction is the vertical direction
  • L1, R1, L2, R2, U1, D1, U2, and D2 are output in sequence, and then the phase information of the target pixels is combined to generate multiple sets of target combined phase information.
  • phase information of the target pixel is combined to generate multiple sets of target combined phase information, and the multiple sets of target combined phase information are used as a phase array of a second size corresponding to the pixel array.
  • the size of the phase array of the second size is equivalent to the size of 4 × 4 pixels arranged in the array.
  • panchromatic pixel groups and color pixel groups in the pixel array are used as target pixel groups
  • the phase information of each target pixel in the panchromatic pixel groups and each color pixel group is obtained.
  • the phase information of the target pixels is combined to generate multiple sets of target combined phase information; finally, according to the multiple sets of target combined phase information, a second-size phase array corresponding to the pixel array is generated.
  • the specific calculation process is the same as the process of calculating the phase array of the second size when all the panchromatic pixel groups in the pixel array are used as the target pixel group, and will not be repeated here.
  • the phase array can be input into the ISP, and the phase difference of the pixel array can be calculated by the ISP based on the phase array. Then, calculate the defocus distance based on the phase difference, and calculate the DAC code value corresponding to the defocus distance. Finally, the code value is converted into a driving current by the driver IC of the motor (VCM), and the motor drives the lens to move to the clear position.
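The focusing flow above (phase array → phase difference → defocus distance → DAC code → driving current) can be sketched end to end. This is an illustrative sketch only: the two conversion gains and all function names are assumptions, not values or interfaces from the description, and real modules calibrate these per lens.

```python
# Hypothetical sketch of the focusing flow: the ISP computes a phase
# difference from the phase array, converts it to a defocus distance, and
# maps that to a DAC code for the VCM driver IC, which drives the lens.

PD_TO_DEFOCUS_GAIN = 0.5   # defocus distance per unit phase difference (assumed)
DEFOCUS_TO_DAC_GAIN = 20   # DAC codes per unit defocus distance (assumed)

def phase_difference(left, right):
    """Mean mismatch between left-view and right-view phase values."""
    return sum(l - r for l, r in zip(left, right)) / len(left)

def dac_code(left, right):
    """Map the phase difference to the DAC code sent to the VCM driver IC."""
    defocus = phase_difference(left, right) * PD_TO_DEFOCUS_GAIN
    return round(defocus * DEFOCUS_TO_DAC_GAIN)

code = dac_code([10.0, 12.0], [9.0, 11.0])
```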
  • the phase information collected by the color pixel group is not very accurate, and some color pixel groups may not have acquired phase information. Therefore, the two color pixel groups adjacent along the first diagonal direction and the two panchromatic pixel groups adjacent along the second diagonal direction in the pixel array are used as the target pixel groups, or the two panchromatic pixel groups adjacent along the second diagonal direction are used as target pixel groups, and for each target pixel group, the phase information of each pixel in the target pixel group is acquired. The phase information of the target pixel can be obtained from at least one of the microlenses in the first direction and the microlenses in the second direction; therefore, various kinds of phase information can be obtained.
  • the phase information of the target pixels is combined to generate multiple sets of target combined phase information.
  • a phase array of a second size corresponding to the pixel array is generated according to multiple sets of target combined phase information.
  • the phase information of the target pixel is greatly combined by adopting the second size output mode, so as to improve the accuracy of the output phase information and improve the signal-to-noise ratio of the phase information.
  • focusing accuracy can be improved by performing phase focusing based on the phase array of the second size corresponding to the pixel array.
  • the phase information of the target pixels is combined to generate multiple sets of target combined phase information, including:
  • Target pixels at the same position are in the same orientation in the first direction in the corresponding target microlenses;
  • phase information of the target pixel at the same position is combined to generate multiple sets of first combined phase information.
  • the target microlenses in the first direction corresponding to the target pixel group are determined, and at least two pixels corresponding to the target microlenses in the first direction in the first panchromatic pixel group and the second panchromatic pixel group are used as target pixels.
  • the target pixels at the same position are in the same orientation in the first direction in the corresponding target microlens.
  • the first direction is the horizontal direction
  • L1, R1, L2, and R2 corresponding to the first panchromatic pixel group are output in sequence, and L1, R1, L2, and R2 corresponding to the second panchromatic pixel group are output in sequence; then the phase information of the target pixels is combined to generate multiple sets of target combined phase information.
  • the target pixel at the same position is determined among the pixels corresponding to the target microlens.
  • Target pixels at the same position are determined among the pixel 1, pixel 2, pixel 8, and pixel 9 corresponding to the target microlens 2211a in the first direction (filled with gray in the figure).
  • the pixel on the left side of the target microlens 2211a (filled with gray in the figure) in the first direction is determined to be the target pixel, or the pixel on the right side of the target microlens 2211a in the first direction is determined to be the target pixel. That is, L1 and L2 in the first panchromatic pixel group and L1 and L2 in the second panchromatic pixel group are combined to generate the first target combined phase information L; R1 and R2 in the first panchromatic pixel group and R1 and R2 in the second panchromatic pixel group are combined to generate the first target combined phase information R.
  • Multiple sets of first combined phase information are generated based on multiple sets of first target combined phase information L and first target combined phase information R.
  • At least four target microlenses in the first direction in the target pixel group are obtained, and target pixels at the same position are determined from pixels corresponding to the at least four target microlenses in the first direction.
  • the phase information of the target pixel at the same position is combined to generate multiple sets of first combined phase information.
  • the second-size output mode is adopted to substantially combine the phase information of the pixels to improve the accuracy of the output phase information and improve the signal-to-noise ratio of the phase information.
  • focusing accuracy can be improved by performing phase focusing based on the phase array of the second size corresponding to the pixel array.
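The second-size merge described above spans both panchromatic pixel groups of the target pixel group: the L1, L2 values from each group collapse into one L, and the R1, R2 values into one R. A minimal sketch, assuming plain averaging and the dictionary layout shown (both are illustrative, not the patented implementation):

```python
# Hypothetical sketch of the second-size output mode in the first direction:
# same-position phase values are merged across the first and second
# panchromatic pixel groups, a larger merge than in the first-size mode,
# further raising the signal-to-noise ratio.

def combine_across_groups(group_a, group_b):
    """group_a / group_b: dicts like {"L": [L1, L2], "R": [R1, R2]} for the
    first and second panchromatic pixel groups of one target pixel group."""
    merged = {}
    for key in group_a:
        values = group_a[key] + group_b[key]
        merged[key] = sum(values) / len(values)
    return merged

out = combine_across_groups(
    {"L": [10.0, 12.0], "R": [11.0, 13.0]},
    {"L": [14.0, 16.0], "R": [15.0, 17.0]},
)
```

The vertical case is identical with "U"/"D" keys in place of "L"/"R".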
  • the vertical direction is named the second direction here, and if at least two pixels corresponding to the target microlens in the second direction are used as target pixels, then the phase information of the target pixels in the target pixel group is combined to generate multiple sets of target combined phase information, including:
  • the target microlenses in the second direction corresponding to the target pixel group are determined, and at least two pixels corresponding to the target microlenses in the second direction in the first panchromatic pixel group and the second panchromatic pixel group are used as target pixels.
  • the target pixels at the same position are at the same orientation in the second direction in the corresponding target microlenses.
  • U1, D1, U2, and D2 corresponding to the first panchromatic pixel group are sequentially output
  • U1, D1, U2, and D2 corresponding to the second panchromatic pixel group are sequentially output
  • the phase information of the target pixels is combined to generate multiple sets of target combined phase information.
  • At least four target microlenses in the second direction are obtained, and target pixels at the same position are determined from the pixels corresponding to the at least four target microlenses in the second direction.
  • Target pixels at the same position are determined among the pixel 3, pixel 6, pixel 4, and pixel 7 corresponding to the target microlens 2211a in the second direction (filled with oblique lines in the figure).
  • At least four target microlenses in the second direction in the target pixel group are acquired, and target pixels at the same position are determined from pixels corresponding to the at least four target microlenses in the second direction.
  • the second-size output mode is adopted to substantially combine the phase information of the pixels to improve the accuracy of the output phase information and improve the signal-to-noise ratio of the phase information.
  • focusing accuracy can be improved by performing phase focusing based on the phase array of the second size corresponding to the pixel array.
  • phase information of the target pixel of the panchromatic pixel group is combined to generate multiple sets of target combined phase information, including:
  • the target microlenses in the first direction corresponding to the target pixel group are determined, and at least two pixels corresponding to the target microlenses in the first direction in the first panchromatic pixel group and the second panchromatic pixel group are used as target pixels.
  • the first direction is the horizontal direction
  • L1, R1, L2, and R2 corresponding to the first panchromatic pixel group are output in sequence, then L1, R1, L2, and R2 corresponding to the second panchromatic pixel group are output in sequence, and the phase information of the target pixels is then combined to generate multiple sets of target combined phase information.
  • the target pixel at the same position is determined among the pixels corresponding to the target microlens.
  • Target pixels at the same position are determined among the pixel 1, pixel 2, pixel 8, and pixel 9 corresponding to the target microlens 2211a in the first direction (filled with gray in the figure).
  • the pixel on the left side of the target microlens 2211a (filled with gray in the figure) in the first direction is the target pixel
  • the pixel on the right side of the target microlens 2211a in the first direction (filled with gray in the figure) is determined to be the target pixel. That is, L1 and L2 in the first panchromatic pixel group and L1 and L2 in the second panchromatic pixel group are combined to generate the first target combined phase information L; R1 and R2 in the first panchromatic pixel group and R1 and R2 in the second panchromatic pixel group are combined to generate the first target combined phase information R.
  • Multiple sets of first combined phase information are generated based on multiple sets of first target combined phase information L and first target combined phase information R.
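The first-direction combination described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the description only says the readouts are "combined", so summation is an assumption (averaging would work equally well), and all dictionary keys are illustrative names for the raw phase readouts.

```python
def combine_first_direction(group1, group2):
    """Merge the left/right phase readouts (L1, L2, R1, R2) of two
    panchromatic pixel groups into one (L, R) pair of target combined
    phase information. Summation is an assumption; the source only
    states that the phase information is 'combined'."""
    L = group1["L1"] + group1["L2"] + group2["L1"] + group2["L2"]
    R = group1["R1"] + group1["R2"] + group2["R1"] + group2["R2"]
    return L, R

# Hypothetical readouts for the first and second panchromatic pixel groups
g1 = {"L1": 10, "L2": 12, "R1": 9, "R2": 11}
g2 = {"L1": 8, "L2": 14, "R1": 10, "R2": 12}
L, R = combine_first_direction(g1, g2)  # L = 44, R = 42
```

Averaging the eight pixels at the same relative position is what raises the signal-to-noise ratio of the resulting L/R pair compared with a single pixel's readout.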
  • the target microlenses in the first direction and the target microlenses in the second direction corresponding to the target pixel group are determined; at least two pixels corresponding to the target microlenses in the first direction in the first panchromatic pixel group and the second panchromatic pixel group are used as target pixels, and at least two pixels corresponding to the target microlenses in the second direction in the first panchromatic pixel group and the second panchromatic pixel group are also used as target pixels.
  • the target pixel at the same position is determined among the pixels corresponding to the target microlens.
  • Target pixels at the same position are determined among the pixel 1, pixel 2, pixel 8, and pixel 9 corresponding to the target microlens 2211a in the first direction (filled with gray in the figure).
  • the pixel on the left side of the target microlens 2211a (filled with gray in the figure) in the first direction is the target pixel
  • the pixel on the right side of the target microlens 2211a in the first direction (filled with gray in the figure) is determined to be the target pixel. That is, L1 and L2 in the first panchromatic pixel group and L1 and L2 in the second panchromatic pixel group are combined to generate the first target combined phase information L; R1 and R2 in the first panchromatic pixel group and R1 and R2 in the second panchromatic pixel group are combined to generate the first target combined phase information R.
  • the target pixel at the same position is determined among the pixels corresponding to the target microlens.
  • Target pixels at the same position are determined among the pixel 3 , pixel 6 , pixel 3 , and pixel 7 corresponding to the target microlens 2211 a in the second direction (filled with oblique lines in the figure).
  • Multiple sets of second combined phase information are generated based on multiple sets of first target combined phase information L, first target combined phase information R, and multiple sets of second target combined phase information U and second target combined phase information D.
  • At least four target microlenses in the first direction and at least four target microlenses in the second direction of the two panchromatic pixel groups in the target pixel group are obtained, and target pixels at the same position are determined from the pixels corresponding to the at least four target microlenses in the first direction and the at least four target microlenses in the second direction.
  • the phase information can be obtained based on the target microlenses in the first direction and the second direction at the same time, and the phase information of the pixels can then be combined to a greater extent in the second-size output mode, improving the accuracy of the output phase information and the signal-to-noise ratio of the phase information.
  • focusing accuracy can be improved by performing phase focusing based on the phase array of the second size corresponding to the pixel array.
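The two-direction combination above can be sketched in a few lines. As before, this is a hedged illustration: a plain sum stands in for "combining", and the direction labels 'L', 'R', 'U', 'D' are assumed names for the left/right (first-direction) and up/down (second-direction) target pixels.

```python
def combine_directions(samples):
    """samples maps each direction label ('L', 'R', 'U', 'D') to the
    list of raw phase readouts of target pixels at the same relative
    position under their target microlenses. Returns one combined
    value per direction."""
    return {direction: sum(values) for direction, values in samples.items()}

combined = combine_directions({
    "L": [10, 12, 8, 14],  # left-side target pixels, first direction
    "R": [9, 11, 10, 12],  # right-side target pixels, first direction
    "U": [11, 13, 9, 10],  # upper target pixels, second direction
    "D": [10, 12, 8, 13],  # lower target pixels, second direction
})
```

Producing both an (L, R) and a (U, D) pair lets the subsequent phase-difference step work on horizontal and vertical detail in the scene, which is the benefit of using target microlenses in both directions.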
  • the target pixel groups are two color pixel groups adjacent along the first diagonal direction and two panchromatic pixel groups adjacent along the second diagonal direction in the pixel array; then generating a second-size phase array corresponding to the pixel array according to the multiple sets of target combined phase information includes:
  • determining, according to the light intensity of the current shooting scene, the third phase weight corresponding to the color pixel group and the fourth phase weight corresponding to the panchromatic pixel group; wherein the third phase weight corresponding to the color pixel group differs under different light intensities, and the fourth phase weight corresponding to the panchromatic pixel group differs under different light intensities;
  • a second-sized phase array of the pixel array in the second direction is generated.
  • if the determined target pixel group includes two color pixel groups adjacent along the first diagonal direction in the pixel array and two panchromatic pixel groups adjacent along the second diagonal direction, then, when generating the phase array of the second size based on the phase information of the target pixel group, the weights between different pixel groups may be considered. The third phase weight corresponding to the color pixel group and the fourth phase weight corresponding to the panchromatic pixel group may be determined according to the light intensity of the current shooting scene.
  • the farther the light intensity of the current shooting scene is below the second preset threshold, the smaller the third phase weight corresponding to the color pixel group and the larger the fourth phase weight corresponding to the panchromatic pixel group. Because the light intensity in such a scene is relatively low, a larger fourth phase weight for the panchromatic pixel group yields more accurate phase information. As the light intensity increases and approaches the second preset threshold, the third phase weight corresponding to the color pixel group increases and the fourth phase weight corresponding to the panchromatic pixel group decreases.
  • the third phase weights corresponding to the color pixel groups under different light intensities are different
  • the fourth phase weights corresponding to the panchromatic pixel groups under different light intensities are different.
  • the third phase weight corresponding to the color pixel group is 40%, of which the phase weight of the green pixel group is 20%, the phase weight of the red pixel group is 10%, and the phase weight of the blue pixel group is 10%.
  • the fourth phase weight corresponding to the panchromatic pixel group is 60%, which is not limited in this application.
  • a phase array of the second size of the pixel array can be generated. For example, for this pixel array, the phase information of the pixel array in the first direction is calculated as a weighted sum of the multiple sets of target combined phase information of the first red pixel group with a phase weight of 10%, the second red pixel group with 10%, the first blue pixel group with 10%, the second blue pixel group with 10%, each green pixel group with 20%, and each panchromatic pixel group with 60%, thereby obtaining the phase array of the second size.
  • the second-size phase array of the pixel array can be generated based on the multiple sets of target combined phase information of the color pixel group and its third phase weight, and the multiple sets of target combined phase information of the panchromatic pixel group and its fourth phase weight.
  • the phase array of the second size of the pixel array is jointly generated, which can improve the comprehensiveness of the phase information.
  • the phase weights of the target phase information of the color pixel group and the panchromatic pixel group are different under different light intensities. In this way, the accuracy of the phase information can be improved by adjusting the weights under different light intensities.
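The weighting step above can be sketched as a simple weighted sum. The 20%/10%/10%/60% values follow the example in the description; the weighted-sum form and the group names are assumptions for illustration, not the fixed implementation.

```python
def weighted_phase(target_phase, weights):
    """Weighted sum of per-pixel-group target combined phase
    information. Both dicts are keyed by pixel-group name."""
    return sum(target_phase[group] * weights[group] for group in target_phase)

# Example weights from the description (low-light scene):
weights = {"green": 0.20, "red": 0.10, "blue": 0.10, "panchromatic": 0.60}
# Hypothetical combined phase values for each group:
phase = {"green": 1.2, "red": 1.0, "blue": 0.9, "panchromatic": 1.1}
value = weighted_phase(phase, weights)  # ≈ 1.09
```

In practice the weights would be reselected as the light intensity changes, shifting reliance toward the more sensitive panchromatic groups in dim scenes.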
  • before outputting the phase array corresponding to the pixel array according to the phase information output mode, the method further includes: determining a target pixel array from multiple pixel arrays in the image sensor according to a preset extraction ratio and a preset extraction position of the pixel array used for focus control.
  • outputting the phase array corresponding to the pixel array according to the phase information output mode includes:
  • outputting, according to the phase information output mode, a phase array corresponding to the target pixel array.
  • the area of the image sensor is large, and it contains tens of thousands of minimal-unit pixel arrays. If all phase information were extracted from the image sensor for phase focusing, the amount of phase information data would be too large, making the actual amount of calculation excessive, wasting system resources and reducing image processing speed.
  • the pixel arrays used for focus control can be extracted in advance from the multiple pixel arrays in the image sensor according to a preset extraction ratio and preset extraction positions. For example, extraction may be performed at a preset extraction ratio of about 3%, that is, one pixel array is extracted from every 32 pixel arrays as a pixel array for focus control. The extracted pixel arrays are arranged as vertices of a hexagon, that is, the extracted pixel arrays form a hexagon. In this way, phase information can be obtained uniformly.
  • the present application does not limit the preset extraction ratio and preset extraction position.
  • the phase information output mode adapted to the light intensity of the current shooting scene can be determined.
  • for the pixel array used for focus control, the phase array corresponding to the pixel array is output according to the phase information output mode; wherein the phase array includes the phase information corresponding to the target pixels in the pixel array.
  • the phase difference of the pixel array is calculated based on the phase array, and focus control is performed according to the phase difference.
  • the target pixel array is determined from multiple pixel arrays in the image sensor according to the preset extraction ratio and preset extraction position of the pixel array used for focus control. In this way, instead of using all the phase information in the image sensor for focusing, only the phase information corresponding to the target pixel array is used for focusing, which greatly reduces the amount of data and improves the speed of image processing.
  • the target pixel array is determined from multiple pixel arrays in the image sensor, so that the phase information can be obtained more uniformly, ultimately improving the accuracy of phase focusing.
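The extraction step can be sketched as follows. The 1-in-32 ratio matches the roughly 3% example above; uniform striding stands in for the hexagonal preset positions, which are not specified numerically, and the function name and `offset` parameter are illustrative.

```python
def select_focus_arrays(num_arrays, ratio=1 / 32, offset=0):
    """Return the indices of the pixel arrays extracted for focus
    control at a preset extraction ratio and starting position.
    Uniform striding is a stand-in for the preset hexagonal layout."""
    step = round(1 / ratio)
    return list(range(offset, num_arrays, step))

indices = select_focus_arrays(128)  # -> [0, 32, 64, 96]
```

Only the phase arrays of the selected indices would then be read out and correlated, which is where the data-volume saving comes from.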
  • a focus control method further comprising:
  • a first preset threshold and a second preset threshold of light intensity are determined according to the exposure parameter and the size of the pixel.
  • when determining the thresholds of the light intensity, they may be determined according to the exposure parameters and the size of the pixel.
  • the exposure parameters include shutter speed, lens aperture size, and sensitivity (ISO).
  • the first preset threshold value and the second preset threshold value of the light intensity are determined according to the exposure parameters and the size of the pixel, and the light intensity range is divided into three ranges.
  • each light intensity range corresponds to a phase information output mode, thereby realizing more refined phase information calculation.
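The threshold-based mode selection can be sketched as below. The threshold values in the example are illustrative; per the description they would be derived from the exposure parameters (shutter speed, aperture, ISO) and the pixel size, a derivation the source does not spell out.

```python
def choose_output_mode(light_intensity, first_threshold, second_threshold):
    """Map a measured light intensity to a phase information output
    mode over the three ranges defined by the two thresholds
    (first_threshold > second_threshold)."""
    if light_intensity > first_threshold:
        return "full-size"
    if light_intensity > second_threshold:
        return "first-size"
    return "second-size"

mode = choose_output_mode(800, first_threshold=500, second_threshold=100)
# mode == "full-size"
```

Brighter scenes tolerate the full-size mode's per-pixel phase output, while dimmer scenes fall back to the first- and second-size modes, which trade spatial resolution of the phase array for signal-to-noise ratio.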
  • a focus control device 1700 is provided, which is applied to an image sensor, and the device includes:
  • the phase information output mode determination module 1720 is configured to determine a phase information output mode adapted to the light intensity of the current shooting scene according to the light intensity of the current shooting scene; wherein, in different phase information output modes, the output phase array of different sizes;
  • the phase array output module 1740 is configured to output the phase array corresponding to the pixel array according to the phase information output mode; wherein, the phase array includes phase information corresponding to the target pixel in the pixel array;
  • the focus control module 1760 is configured to calculate the phase difference of the pixel array based on the phase array, and perform focus control according to the phase difference.
  • the phase array output module 1740 includes:
  • the target pixel determination unit 1742 is configured to determine the target pixel group from the color pixel groups and the panchromatic pixel groups in the pixel array according to the phase information output mode, determine the target microlens corresponding to the target pixel group, and use the at least two pixels corresponding to the target microlens as target pixels;
  • the phase information generating unit 1744 is configured to acquire, for each target pixel group, the phase information of the target pixels;
  • the phase array generation unit 1746 is configured to generate a phase array corresponding to the pixel array according to the phase information of the target pixel.
  • the target pixel determining unit 1742 is further configured to determine the target microlenses in the first direction corresponding to the target pixel group; each target microlens in the first direction corresponds to at least 2 adjacent target pixels arranged along the first direction; and the at least two pixels corresponding to the target microlenses in the first direction are used as target pixels.
  • the target pixel determining unit 1742 is further configured to determine the target microlenses in the second direction corresponding to the target pixel group; each target microlens in the second direction corresponds to at least 2 adjacent target pixels arranged along the second direction; and the at least two pixels corresponding to the target microlenses in the second direction are used as target pixels.
  • the target pixel determining unit 1742 is further configured to determine the target microlenses in the first direction and the target microlenses in the second direction corresponding to the target pixel group; each target microlens in the first direction corresponds to at least 2 adjacent target pixels arranged along the first direction; each target microlens in the second direction corresponds to at least 2 adjacent target pixels arranged along the second direction; the second direction and the first direction are perpendicular to each other; and the at least two pixels corresponding to the target microlenses in the first direction and the at least two pixels corresponding to the target microlenses in the second direction are used as target pixels.
  • the phase information output mode determination module 1720 is also used to determine the target light intensity range to which the light intensity of the current shooting scene belongs; wherein, different light intensity ranges correspond to different phase information output modes; according to the target light intensity Range, to determine the phase information output mode adapted to the light intensity of the current shooting scene.
  • the phase information output mode includes a full-size output mode and a first-size output mode, and the size of the phase array in the full-size output mode is greater than or equal to the size of the phase array in the first-size output mode;
  • the phase information output mode determination module 1720 includes:
  • a full-size output mode determining unit configured to determine that the phase information output mode adapted to the light intensity of the current shooting scene is the full-size output mode if the light intensity of the current shooting scene is greater than a first preset threshold
  • the first size output mode determining unit is configured to determine that the phase information output mode adapted to the light intensity of the current shooting scene is the first-size output mode if the light intensity of the current shooting scene is greater than the second preset threshold and less than or equal to the first preset threshold; the first preset threshold is greater than the second preset threshold.
  • the target pixel determination unit 1742 is further configured to use the color pixel groups in the pixel array as target pixel groups, or to use the color pixel groups and panchromatic pixel groups in the pixel array as target pixel groups; wherein one target pixel group corresponds to one pixel group;
  • the phase array generation unit 1746 is further configured to generate a full-size phase array corresponding to the pixel array according to the phase information of the target pixel.
  • the target pixel determination unit 1742 is further configured to use at least one of the color pixel groups and the panchromatic pixel groups in the pixel array as the target pixel group; wherein one target pixel group corresponds to one pixel group;
  • the phase array generating unit 1746 is further configured to combine the phase information of the target pixels in the target pixel group to generate multiple sets of intermediate combined phase information, and generate, according to the multiple sets of intermediate combined phase information, a first-size phase array corresponding to the pixel array.
  • the phase array generation unit 1746 is further configured to acquire at least two target microlenses in the first direction in the target pixel group, and determine target pixels at the same position from the pixels corresponding to the at least two target microlenses in the first direction; the target pixels at the same position are in the same orientation in the first direction in their corresponding target microlenses; and combine the phase information of the target pixels at the same position to generate multiple sets of first combined phase information.
  • the phase array generating unit 1746 is further configured to acquire at least two target microlenses in the second direction in the target pixel group, and determine target pixels at the same position from the pixels corresponding to the at least two target microlenses in the second direction; the target pixels at the same position are in the same orientation in the second direction in their corresponding target microlenses; and combine the phase information of the target pixels at the same position to generate multiple sets of second combined phase information.
  • the phase array generating unit 1746 is further configured to acquire at least two target microlenses in the first direction in the target pixel group, and determine target pixels at the same position from the pixels corresponding to the at least two target microlenses in the first direction, the target pixels at the same position being in the same orientation in the first direction in their corresponding target microlenses; combine the phase information of the target pixels at the same position to generate multiple sets of first combined phase information; acquire at least two target microlenses in the second direction in the target pixel group, and determine target pixels at the same position from the pixels corresponding to the at least two target microlenses in the second direction, the target pixels at the same position being in the same orientation in the second direction in their corresponding target microlenses; and combine the phase information of the target pixels at the same position to generate multiple sets of second combined phase information.
  • the phase array generation unit 1746 is further configured to determine, according to the light intensity of the current shooting scene, the first phase weight corresponding to the color pixel group and the second phase weight corresponding to the panchromatic pixel group; wherein the first phase weights corresponding to the color pixel group under different light intensities are different, and the second phase weights corresponding to the panchromatic pixel group under different light intensities are different; and generate the first-size phase array of the pixel array based on the multiple sets of intermediate combined phase information and first phase weight corresponding to the color pixel group, and the multiple sets of intermediate combined phase information and second phase weight corresponding to the panchromatic pixel group.
  • the phase information output mode further includes a second size output mode; wherein, the size of the phase array in the first size output mode is greater than or equal to the size of the phase array in the second size output mode;
  • the phase information output mode determination module 1720 includes:
  • the second size output mode determining unit is configured to determine that the phase information output mode adapted to the light intensity of the current shooting scene is the second size output mode if the light intensity of the current shooting scene is less than a second preset threshold.
  • the target pixel determining unit 1742 is further configured to use two color pixel groups adjacent along the first diagonal direction in the pixel array and two panchromatic pixel groups adjacent along the second diagonal direction as the target pixel group;
  • the phase array generating unit 1746 is further configured to combine the phase information of the target pixels of the two panchromatic pixel groups in the target pixel group and the phase information of the target pixels of the two color pixel groups in the target pixel group to generate multiple sets of target combined phase information; and generate, according to the multiple sets of target combined phase information, a phase array of a second size corresponding to the pixel array.
  • the target pixel determining unit 1742 is further configured to use two adjacent panchromatic pixel groups along the second diagonal direction as target pixel groups;
  • the phase array generation unit 1746 is further configured to combine the phase information of the target pixels of the two panchromatic pixel groups in the target pixel group to generate multiple sets of target combined phase information, and generate, according to the multiple sets of target combined phase information, a phase array of a second size corresponding to the pixel array.
  • the phase array generation unit 1746 is further configured to obtain at least four target microlenses in the first direction of the two panchromatic pixel groups in the target pixel group, and determine target pixels at the same position from the pixels corresponding to the at least four target microlenses in the first direction; the target pixels at the same position are in the same orientation in the first direction in their corresponding target microlenses; and combine the phase information of the target pixels at the same position to generate multiple sets of first combined phase information.
  • the phase array generation unit 1746 is further configured to obtain at least four target microlenses in the second direction of the two panchromatic pixel groups in the target pixel group, and determine target pixels at the same position from the pixels corresponding to the at least four target microlenses in the second direction; the target pixels at the same position are in the same orientation in the second direction in their corresponding target microlenses; and combine the phase information of the target pixels at the same position to generate multiple sets of second combined phase information.
  • the phase array generation unit 1746 is further configured to obtain at least four target microlenses in the first direction and at least four target microlenses in the second direction of the two panchromatic pixel groups in the target pixel group, and determine target pixels at the same position from the pixels corresponding to them;
  • the target pixels at the same position in the first direction are in the same orientation in their corresponding target microlenses, and their phase information is combined to generate multiple sets of first combined phase information;
  • the target pixels at the same position in the second direction are in the same orientation in their corresponding target microlenses, and their phase information is combined to generate multiple sets of second combined phase information.
  • the phase array generation unit 1746 is further configured to determine, according to the light intensity of the current shooting scene, the third phase weight corresponding to the color pixel group and the fourth phase weight corresponding to the panchromatic pixel group; wherein the third phase weight corresponding to the color pixel group under different light intensities is different, and the fourth phase weight corresponding to the panchromatic pixel group under different light intensities is different; and generate the second-size phase array of the pixel array based on the multiple sets of target combined phase information and third phase weight corresponding to the color pixel group, and the multiple sets of target combined phase information and fourth phase weight corresponding to the panchromatic pixel group.
  • a focus control device further comprising:
  • a target pixel array determining module configured to determine the target pixel array from multiple pixel arrays in the image sensor according to the preset extraction ratio and preset extraction position of the pixel array used for focus control;
  • the phase array output module 1740 is further configured to output the phase array corresponding to the target pixel array according to the phase information output mode.
  • a focus control device further comprising:
  • the threshold determination module is used to determine the first preset threshold and the second preset threshold of the light intensity according to the exposure parameter and the size of the pixel.
  • the division of the above focus control device into modules is for illustration only. In other embodiments, the focus control device may be divided into different modules as required, so as to complete all or part of the functions of the above focus control device.
  • Each module in the above-mentioned focusing control device can be fully or partially realized by software, hardware and a combination thereof.
  • the above-mentioned modules can be embedded in or independent of the processor in the computer device in the form of hardware, and can also be stored in the memory of the computer device in the form of software, so that the processor can invoke and execute the corresponding operations of the above-mentioned modules.
  • Fig. 19 is a schematic diagram of the internal structure of an electronic device in one embodiment.
  • the electronic device can be any terminal device such as mobile phone, tablet computer, notebook computer, desktop computer, PDA (Personal Digital Assistant, personal digital assistant), POS (Point of Sales, sales terminal), vehicle-mounted computer, wearable device, etc.
  • the electronic device includes a processor and memory connected by a system bus.
  • the processor may include one or more processing units.
  • the processor can be a CPU (Central Processing Unit) or a DSP (Digital Signal Processor), etc.
  • the memory may include non-volatile storage media and internal memory. Nonvolatile storage media store operating systems and computer programs.
  • the computer program can be executed by a processor, so as to implement a focus control method provided in the following embodiments.
  • the internal memory provides a high-speed running environment for the operating system and the computer program in the non-volatile storage medium.
  • each module in the focus control device provided in the embodiment of the present application may be in the form of a computer program.
  • the computer program can run on the electronic device.
  • the program modules constituted by the computer program can be stored in the memory of the electronic device.
  • the embodiment of the present application also provides a computer-readable storage medium.
  • One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform operations of the focus control method.
  • the embodiment of the present application also provides a computer program product including instructions, which, when running on a computer, causes the computer to execute the focusing control method.
  • Non-volatile memory can include ROM (Read-Only Memory), PROM (Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), or flash memory.
  • Volatile memory can include RAM (Random Access Memory, Random Access Memory), which is used as external cache memory.
  • RAM is available in various forms, such as SRAM (Static Random Access Memory), DRAM (Dynamic Random Access Memory), SDRAM (Synchronous Dynamic Random Access Memory), DDR SDRAM (Double Data Rate Synchronous Dynamic Random Access Memory), ESDRAM (Enhanced Synchronous Dynamic Random Access Memory), SLDRAM (Sync Link Dynamic Random Access Memory), RDRAM (Rambus Dynamic Random Access Memory), and DRDRAM (Direct Rambus Dynamic Random Access Memory).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)
  • Focusing (AREA)
  • Automatic Focus Adjustment (AREA)

Abstract

The present application relates to a focus control method and device, an image sensor, an electronic device, and a computer-readable storage medium, applied to an image sensor. The method includes: determining, according to the light intensity of the current shooting scene, a phase information output mode adapted to the light intensity of the current shooting scene, wherein phase arrays of different sizes are output in different phase information output modes (1020); outputting, according to the phase information output mode, a phase array corresponding to the pixel array, wherein the phase array includes phase information corresponding to target pixels in the pixel array (1040); and calculating a phase difference of the pixel array based on the phase array, and performing focus control according to the phase difference (1060).

Description

Focus control method and apparatus, image sensor, electronic device, and computer-readable storage medium
This application claims priority to Chinese Patent Application No. 202111617501.0, filed with the China National Intellectual Property Administration on December 27, 2021 and entitled "Focus control method and apparatus, image sensor, electronic device, and computer-readable storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of image processing, and in particular to a focus control method and apparatus, an image sensor, an electronic device, and a computer-readable storage medium.
Background
With the development of electronic devices, more and more users capture images with them. To obtain sharp images, the camera module of the electronic device usually needs to be focused: the distance between the lens and the image sensor is adjusted so that the subject is imaged on the focal plane. Traditional focusing methods include phase detection auto focus (PDAF).
Traditional PDAF mainly computes a phase difference from an RGB pixel array, controls a motor according to that phase difference, and the motor then drives the lens to a suitable position so that the subject is imaged on the focal plane.
However, because the sensitivity of an RGB pixel array varies with light intensity, the phase difference computed from it is less accurate under some light intensities, which in turn greatly reduces focusing accuracy.
Summary
Embodiments of the present application provide a focus control method and apparatus, an electronic device, an image sensor, and a computer-readable storage medium, which can improve focusing accuracy.
In one aspect, an image sensor is provided. The image sensor includes a microlens array, a pixel array, and a filter array. The filter array includes a minimal repeating unit, the minimal repeating unit includes a plurality of filter groups, and each filter group includes a color filter and a panchromatic filter; the color filter has a narrower spectral response than the panchromatic filter, and the color filter and the panchromatic filter each include 9 sub-filters arranged in an array;
wherein the pixel array includes a plurality of pixel groups, each pixel group being a panchromatic pixel group or a color pixel group; each panchromatic pixel group corresponds to a panchromatic filter, and each color pixel group corresponds to a color filter; both the panchromatic pixel group and the color pixel group include 9 pixels; the pixels of the pixel array are arranged in correspondence with the sub-filters of the filter array, and each pixel corresponds to one photosensitive element;
wherein the microlens array includes a plurality of microlens groups, each corresponding to a panchromatic pixel group or a color pixel group; a microlens group includes a plurality of microlenses, and at least one of those microlenses corresponds to at least two pixels.
In another aspect, a focus control method is provided, applied to the image sensor described above. The method includes:
determining, according to the light intensity of the current shooting scene, a phase-information output mode adapted to that light intensity, where the size of the output phase array differs between phase-information output modes;
outputting, in the phase-information output mode, a phase array corresponding to the pixel array, the phase array including phase information corresponding to target pixels in the pixel array; and
computing a phase difference of the pixel array based on the phase array, and performing focus control according to the phase difference.
In another aspect, a focus control apparatus is provided, applied to the image sensor described above. The apparatus includes:
a phase-information output mode determination module configured to determine, according to the light intensity of the current shooting scene, a phase-information output mode adapted to that light intensity, where the size of the output phase array differs between phase-information output modes;
a phase array output module configured to output, in the phase-information output mode, a phase array corresponding to the pixel array, the phase array including phase information corresponding to target pixels in the pixel array; and
a focus control module configured to compute a phase difference of the pixel array based on the phase array and perform focus control according to the phase difference.
In another aspect, an electronic device is provided, including a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the operations of the focus control method described above.
In another aspect, a computer-readable storage medium is provided, storing a computer program that, when executed by a processor, implements the operations of the method described above.
In another aspect, a computer program product is provided, including a computer program/instructions that, when executed by a processor, implement the operations of the focus control method described above.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in their description are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art may obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of an electronic device in one embodiment;
Fig. 2 is a schematic diagram of the principle of phase detection auto focus;
Fig. 3 is a schematic diagram of phase-detection pixels arranged in pairs among the pixels of an image sensor;
Fig. 4 is an exploded schematic view of an image sensor in one embodiment;
Fig. 5 is a schematic diagram of the connection between a pixel array and a readout circuit in one embodiment;
Fig. 6 is a schematic diagram of the arrangement of the minimal repeating unit of a pixel array in one embodiment;
Fig. 7 is a schematic diagram of the arrangement of the minimal repeating unit of a pixel array in another embodiment;
Fig. 8 is a schematic diagram of the arrangement of the minimal repeating unit of a microlens array in one embodiment;
Fig. 9 is a schematic diagram of the arrangement of the minimal repeating unit of a microlens array in another embodiment;
Fig. 10 is a flowchart of a focus control method in one embodiment;
Fig. 11 is a flowchart of a method of outputting, in the phase-information output mode, a phase array corresponding to the pixel array, in one embodiment;
Fig. 12 is a flowchart of the method in Fig. 11 of determining, according to the light intensity of the current shooting scene, the phase-information output mode adapted to that light intensity;
Fig. 13 is a flowchart of the method in Fig. 12 of determining, from the target light-intensity range, the phase-information output mode adapted to the current shooting scene's light intensity;
Fig. 14 is a schematic diagram of generating a full-size phase array in one embodiment;
Fig. 15 is a flowchart of a method of generating a first-size phase array in one embodiment;
Fig. 16 is a schematic diagram of generating a second-size phase array in one embodiment;
Fig. 17 is a structural block diagram of a focus control apparatus in one embodiment;
Fig. 18 is a structural block diagram of the phase array output module in Fig. 17;
Fig. 19 is a schematic diagram of the internal structure of an electronic device in one embodiment.
Detailed Description
To make the purposes, technical solutions, and advantages of the present application clearer, the application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present application, not to limit it.
It can be understood that the terms "first", "second", "third", and so on used in this application may be used herein to describe various elements, but the elements are not limited by these terms; the terms serve only to distinguish one element from another. For example, without departing from the scope of the application, a first size may be called a second size, and similarly a second size may be called a first size; both are sizes, but they are not the same size. Likewise, a first preset threshold may be called a second preset threshold and vice versa; both are preset thresholds, but they are not the same preset threshold.
Fig. 1 is a schematic diagram of an application environment of a focus control method in one embodiment. As shown in Fig. 1, the application environment includes an electronic device 100. The electronic device 100 includes an image sensor with a pixel array. The electronic device determines, according to the light intensity of the current shooting scene, a phase-information output mode adapted to that light intensity, where the size of the output phase array differs between modes; outputs, in that mode, a phase array corresponding to the pixel array, the phase array including phase information corresponding to the pixel array; computes a phase difference of the pixel array based on the phase array; and performs focus control according to the phase difference. The electronic device may be any terminal device with image processing capability, such as a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a wearable device (smart band, smart watch, smart glasses, smart gloves, smart socks, smart belt, etc.), a VR (virtual reality) device, a smart home device, or a self-driving car.
The electronic device 100 includes a camera 20, a processor 30, and a housing 40. The camera 20 and the processor 30 are arranged in the housing 40, which may also hold functional modules of the terminal 100 such as the power supply and communication modules, providing them with protection against dust, drops, and water. The camera 20 may be a front camera, rear camera, side camera, or under-display camera, without limitation here. The camera 20 includes a lens and an image sensor 21: when the camera 20 captures an image, light passes through the lens and reaches the image sensor 21, which converts the light signal falling on it into an electrical signal.
Fig. 2 is a schematic diagram of the principle of phase detection auto focus (PDAF). As shown in Fig. 2, M1 is the position of the image sensor when the imaging device is in focus, the in-focus state being the state of successful focusing. When the image sensor is at position M1, the imaging rays g reflected by the object W toward the lens Lens in different directions converge on the image sensor; that is, they are imaged at the same position on the sensor, and the sensor's image is sharp.
M2 and M3 are positions the image sensor may occupy when the imaging device is out of focus. As shown in Fig. 2, when the image sensor is at position M2 or M3, the imaging rays g reflected by the object W toward the lens in different directions are imaged at different positions: at M2 they are imaged at positions A and B respectively, and at M3 at positions C and D respectively. In both cases the sensor's image is blurred.
In PDAF, the positional difference between the images formed on the sensor by imaging rays entering the lens from different directions can be obtained — for example, as shown in Fig. 2, the difference between positions A and B, or between positions C and D. From this difference and the geometric relationship between the lens and the image sensor in the camera, the defocus distance can be derived, the defocus distance being the distance between the sensor's current position and the position it should occupy when in focus; the imaging device can then focus according to the obtained defocus distance.
It follows that when in focus, the computed PD value is 0; otherwise, the larger the value, the farther from the in-focus position, and the smaller the value, the closer to it. With PDAF, the PD value is computed, the defocus distance is obtained from the calibrated correspondence between PD value and defocus distance, and the lens is then driven to the in-focus position according to that defocus distance, achieving focus.
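The calibrated PD-to-defocus relationship described above can be sketched as a small helper. The linear model and the numeric slope below are assumptions for illustration only, since the text states merely that the mapping comes from per-module calibration:

```python
def defocus_from_pd(pd_value, slope, intercept=0.0):
    """Convert a phase difference (PD) into a defocus distance using an
    assumed linear calibration model: defocus = slope * PD + intercept.
    slope/intercept would come from module calibration (hypothetical here)."""
    return slope * pd_value + intercept

def lens_target_position(current_pos, pd_value, slope):
    """In focus when PD is 0; otherwise shift the lens by the estimated defocus."""
    return current_pos + defocus_from_pd(pd_value, slope)

# Example with an assumed calibration slope of 2.5 position units per PD unit.
print(defocus_from_pd(0.0, 2.5))              # in focus -> 0.0
print(lens_target_position(100.0, 4.0, 2.5))  # -> 110.0
```

A PD of 0 leaves the lens where it is; larger PD magnitudes command proportionally larger lens moves, matching the qualitative behaviour described above.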
In the related art, some phase-detection pixels can be arranged in pairs among the pixels of the image sensor. As shown in Fig. 3, the sensor may be provided with phase-detection pixel pairs (hereinafter, pixel pairs) A, B, and C. In each pair, one phase-detection pixel is shielded on the left (Left Shield) and the other on the right (Right Shield).
For a left-shielded phase-detection pixel, only the right-hand part of the imaging beam directed at it can form an image on its photosensitive (unshielded) part; for a right-shielded one, only the left-hand part of the beam can. The imaging beam is thus split into left and right parts, and comparing the images formed by the two parts yields the phase difference.
In one embodiment, it is further described that the image sensor includes a pixel array and a filter array. The filter array includes a minimal repeating unit comprising a plurality of filter groups, each filter group including color filters and panchromatic filters; within a filter group the color filters are arranged along a first diagonal direction and the panchromatic filters along a second diagonal direction, the two directions being different. The color filter has a narrower spectral response than the panchromatic filter, and the color filter and the panchromatic filter each include 9 sub-filters arranged in an array;
wherein the pixel array includes a plurality of panchromatic pixel groups and a plurality of color pixel groups, each panchromatic pixel group corresponding to a panchromatic filter and each color pixel group to a color filter; both contain 9 pixels, the pixels of the pixel array are arranged in correspondence with the sub-filters of the filter array, and each pixel corresponds to one photosensitive element;
wherein the microlens array includes a plurality of microlens groups, each corresponding to a panchromatic or color pixel group; a microlens group includes a plurality of microlenses, at least one of which corresponds to at least two pixels.
In the embodiments of this application, because at least one of the microlenses corresponds to at least two pixels, the phase difference of the pixel array can be computed from the phase information of those at least two pixels.
As shown in Fig. 4, the image sensor 21 includes a microlens array 22, a filter array 23, and a pixel array 24.
The microlens array 22 includes a plurality of minimal repeating units 221, each comprising a plurality of microlens groups 222, each of which includes a plurality of microlenses 2221. The sub-filters in the filter array 23 correspond one-to-one with the pixels in the pixel array 24, and at least one of the microlenses 2221 corresponds to at least two pixels. A microlens 2221 concentrates incident light; the concentrated light passes through the corresponding sub-filter, is projected onto and received by the corresponding pixel, and the pixel converts the received light into an electrical signal.
The filter array 23 includes a plurality of minimal repeating units 231, each of which may include a plurality of filter groups 232. Each filter group 232 includes a panchromatic filter 233 and a color filter 234, the color filter 234 having a narrower spectral response than the panchromatic filter 233. Each panchromatic filter 233 includes 9 sub-filters 2331 and each color filter 234 includes 9 sub-filters 2341. Different filter groups also include different color filters 234.
The colors corresponding to the wavelength bands transmitted by the color filters 234 of the filter groups 232 in a minimal repeating unit 231 include color a, color b, and/or color c: that is, colors a, b, and c; a, b, or c alone; a and b; b and c; or a and c. For example, color a is red, color b is green, and color c is blue; or, for example, color a is magenta, color b is cyan, and color c is yellow, without limitation here.
In one embodiment, the wavelength band transmitted by the color filter 234 is narrower than that transmitted by the panchromatic filter 233. For example, the color filter 234 may transmit the red, green, or blue band, while the panchromatic filter 233 transmits all visible-light bands; that is, the color filter 234 passes only light of a specific color, while the panchromatic filter 233 passes light of all colors. The color filter 234 may of course correspond to other color bands, such as magenta, violet, cyan, or yellow light, without limitation here.
In one embodiment, the ratio of the number of color filters 234 to panchromatic filters 233 in a filter group 232 may be 1:3, 1:1, or 3:1. With 1:3 (one color filter, three panchromatic filters) there are more panchromatic filters, so, compared with the traditional color-only case, more phase information can be captured through the panchromatic filters 233 in dim light and focusing quality is better. With 1:1 (two of each), good color rendering is obtained while more phase information can still be captured through the panchromatic filters 233 in dim light, so focusing quality is also good. With 3:1 (three color filters, one panchromatic filter), still better color rendering is obtained, and dim-light focusing quality is likewise improved.
The pixel array 24 includes a plurality of pixels arranged in correspondence with the sub-filters of the filter array 23, and is configured to receive light passing through the filter array 23 to generate electrical signals.
Here, "configured to receive light passing through the filter array 23 to generate electrical signals" means that the pixel array 24 photoelectrically converts the light, passing through the filter array 23, of the scene of a given set of subjects, to generate electrical signals. The light of that scene is used to generate image data. For example, if the subject is a building, the scene of the given set of subjects is the scene in which the building is located, which may also contain other objects.
In one embodiment, the pixel array 24 may be an RGBW pixel array including a plurality of minimal repeating units 241; a minimal repeating unit 241 includes a plurality of pixel groups 242, which include panchromatic pixel groups 243 and color pixel groups 244. Each panchromatic pixel group 243 includes 9 panchromatic pixels 2431, and each color pixel group 244 includes 9 color pixels 2441. Each panchromatic pixel 2431 corresponds to one sub-filter 2331 of a panchromatic filter 233 and receives the light passing through it to generate an electrical signal; each color pixel 2441 corresponds to one sub-filter 2341 of a color filter 234 and receives the light passing through it to generate an electrical signal. Each pixel corresponds to one photosensitive element: each panchromatic pixel 2431 corresponds to one photosensitive element, and each color pixel 2441 corresponds to one photosensitive element.
The image sensor 21 in this embodiment includes the filter array 23 and the pixel array 24. The filter array 23 includes the minimal repeating unit 231 with a plurality of filter groups 232, each including a panchromatic filter 233 and a color filter 234, the color filter 234 having a narrower spectral response than the panchromatic filter 233. More light can therefore be captured when shooting, so the shooting parameters need not be adjusted: focusing quality in dim light is improved without affecting shooting stability, and dim-light focusing is both stable and of high quality. Moreover, each panchromatic filter 233 includes 9 sub-filters 2331 and each color filter 234 includes 9 sub-filters 2341; the pixel array 24 includes a plurality of panchromatic pixels 2431 and color pixels 2441, each corresponding to one sub-filter and receiving the light passing through it to generate an electrical signal. When focusing in dim light, the phase information of the pixels corresponding to the 9 sub-filters can be merged for output, giving phase information with a high signal-to-noise ratio; in well-lit scenes, the phase information of the pixel under each sub-filter can be output individually, giving phase information with both high resolution and a high signal-to-noise ratio. The sensor can thus adapt to different application scenarios and improve focusing quality in each of them.
In one embodiment, as shown in Fig. 4, the minimal repeating unit 231 of the filter array 23 includes 4 filter groups 232 arranged in a matrix. Each filter group 232 includes panchromatic filters 233 and color filters 234, each with 9 sub-filters, so a filter group 232 includes 36 sub-filters in total.
Similarly, the pixel array 24 includes a plurality of minimal repeating units 241 corresponding to the minimal repeating units 231. Each minimal repeating unit 241 includes 4 pixel groups 242 arranged in a matrix, and each pixel group 242 corresponds to one filter group 232.
As shown in Fig. 5, the readout circuit 25 is electrically connected to the pixel array 24 and controls its exposure as well as the reading and output of pixel values. The readout circuit 25 includes a vertical driving unit 251, a control unit 252, a column processing unit 253, and a horizontal driving unit 254. The vertical driving unit 251 includes a shift register and an address decoder and provides readout-scan and reset-scan functions. The control unit 252 configures timing signals according to the operation mode and uses them to coordinate the vertical driving unit 251, the column processing unit 253, and the horizontal driving unit 254. The column processing unit 253 may have an analog-to-digital (A/D) conversion function for converting analog pixel signals into digital form. The horizontal driving unit 254 includes a shift register and an address decoder and scans the pixel array 24 sequentially, column by column.
In one embodiment, as shown in Fig. 6, each filter group 232 includes color filters 234 and panchromatic filters 233; the panchromatic filters 233 of a filter group 232 are arranged along the first diagonal direction D1 and the color filters 234 along the second diagonal direction D2. The first diagonal direction D1 differs from the second diagonal direction D2, which balances color rendering and dim-light focusing quality.
That the first diagonal direction D1 differs from the second diagonal direction D2 may specifically mean that the two directions are not parallel, or that they are perpendicular, and so on.
In other implementations, one color filter 234 and one panchromatic filter 233 may lie on the first diagonal D1, and another color filter 234 and another panchromatic filter 233 on the second diagonal D2.
In one embodiment, each pixel corresponds to one photosensitive element, i.e., an element capable of converting a light signal into an electrical signal, for example a photodiode. As shown in Fig. 6, each panchromatic pixel includes one photodiode (PD) and each color pixel includes one photodiode (PD).
In one embodiment, as shown in Fig. 6, the minimal repeating unit 231 of the filter array 23 includes 4 filter groups 232 arranged in a matrix. Each filter group 232 includes 2 panchromatic filters 233 and 2 color filters 234; with 9 sub-filters 2331 per panchromatic filter 233 and 9 sub-filters 2341 per color filter 234, the minimal repeating unit 231 is 12 rows by 12 columns of 144 sub-filters, arranged as follows:
(Formula images PCTCN2022132425-appb-000001 and PCTCN2022132425-appb-000002: the 12 × 12 sub-filter arrangement.)
Here w denotes a panchromatic sub-filter 2331, and a, b, and c denote color sub-filters 2341. A panchromatic sub-filter 2331 is a sub-filter that blocks all light outside the visible band. Color sub-filters 2341 include red, green, blue, magenta, cyan, and yellow sub-filters: a red sub-filter blocks all light other than red, a green sub-filter all light other than green, a blue sub-filter all light other than blue, a magenta sub-filter all light other than magenta, a cyan sub-filter all light other than cyan, and a yellow sub-filter all light other than yellow.
Each of a, b, and c may be a red, green, blue, magenta, cyan, or yellow sub-filter. For example, b may be a red sub-filter, a a green sub-filter, and c a blue sub-filter; or c red, a green, and b blue; or a red, b blue, and c green, without limitation here; or, for another example, b magenta, a cyan, and c yellow. In other implementations the color filters may also include sub-filters of other colors, such as orange or violet sub-filters, without limitation here.
In one embodiment, as shown in Fig. 7, the minimal repeating unit 231 of the filter array 23 includes 4 filter groups 232 arranged in a matrix. Each filter group 232 includes color filters 234 and panchromatic filters 233; the color filters 234 of a filter group 232 are arranged along the first diagonal direction D1 and the panchromatic filters 233 along the second diagonal direction D2. The pixels of the pixel array (not shown in Fig. 7; see Fig. 6) are arranged in correspondence with the sub-filters of the filter array, and each pixel corresponds to one photosensitive element.
In one embodiment, each filter group 232 includes 2 panchromatic filters 233 and 2 color filters 234, the panchromatic filter 233 including 9 sub-filters 2331 and the color filter 234 including 9 sub-filters 2341, so the minimal repeating unit 231 is 12 rows by 12 columns of 144 sub-filters, arranged as follows:
(Formula image PCTCN2022132425-appb-000003: the 12 × 12 sub-filter arrangement.)
Here w denotes a panchromatic sub-filter, and a, b, and c denote color sub-filters.
The 12-row, 12-column arrangement of 144 sub-filters combines the advantages of the quad and RGBW layouts. Quad allows local same-color 2×2 or 3×3 binning to obtain images of different resolutions with a high signal-to-noise ratio, while full-size quad output retains the full pixel count for a full-size, full-resolution, sharper image. RGBW uses the W pixels to increase the total amount of light entering the image and thereby raises the image's signal-to-noise ratio.
In one embodiment, as shown in Fig. 8(a), the microlens array 22 includes a plurality of minimal repeating units 221, each comprising microlens groups 222 that include a plurality of microlenses 2221. Each microlens group 222 corresponds to a panchromatic pixel group 243 or a color pixel group 244 and includes a plurality of microlenses 2211. For example, a microlens group 222 may include 5 microlenses 2211: 4 first microlenses 2211a and 1 second microlens 2211b, where each first microlens 2211a corresponds to 2 pixels and the second microlens 2211b corresponds to 1 pixel. The 4 first microlenses include 2 first microlenses in the first direction, arranged along the first direction of the microlens group, and 2 in the second direction, arranged along the second direction. The second microlens 2211b is located at the center of the microlens group 222 and corresponds to the central pixel of the pixel group corresponding to that microlens group.
As shown in Fig. 8(a), when the second microlens 2211b is at the center of the microlens group, the 2 first-direction first microlenses (gray fill in Fig. 8(a)) and the 2 second-direction first microlenses (hatched fill in Fig. 8(a)) surround the second microlens 2211b;
wherein the 2 first-direction first microlenses are centrally symmetric about the second microlens 2211b along the first diagonal direction, and the 2 second-direction first microlenses are centrally symmetric about it along the second diagonal direction.
As shown in Fig. 8(b), in another embodiment the second microlens 2211b is at one of the four corners of the microlens group and corresponds to the corner pixel of the pixel group corresponding to that microlens group. For example, the second microlens 2211b may be at the lower-right corner of the microlens group 222; it may equally be at the lower-left, upper-left, or upper-right corner, which this application does not limit.
In one embodiment, one of the plurality of microlenses corresponds to at least four pixels; that is, one of the microlenses 2211 corresponds to at least four pixels.
In one embodiment, as shown in Fig. 9, a microlens group 222 includes, for example, 4 microlenses 2211: 1 third microlens 2211c, 2 fourth microlenses 2211d, and 1 fifth microlens 2211e, where the third microlens 2211c corresponds to 4 pixels, a fourth microlens 2211d corresponds to 2 pixels, and the fifth microlens 2211e corresponds to 1 pixel. The fourth microlens 2211d here may be the same kind of microlens as the first microlens 2211a, and the fifth microlens 2211e the same kind as the second microlens 2211b, which this application does not limit.
In one embodiment, as shown in Fig. 10, a focus control method is provided, applied to the image sensor of the above embodiments, the image sensor including a pixel array and a filter array. The method includes:
Operation 1020: determine, according to the light intensity of the current shooting scene, a phase-information output mode adapted to that light intensity, where the size of the output phase array differs between phase-information output modes.
The light intensity of the current shooting scene differs between scenes and moments, and because the sensitivity of an RGB pixel array varies with light intensity, the phase difference computed from an RGB pixel array is less accurate under some light intensities, which greatly reduces focusing accuracy. Light intensity, also called illuminance, is a physical term for the luminous flux of visible light received per unit area; it is measured in lux (lx) and indicates the strength of illumination and the degree to which a surface is illuminated. The table below gives illuminance values for different weather conditions and locations:
Table 1-1
  Weather and location                       Illuminance
  Direct sunlight on the ground, sunny day   100000 lx
  Center of a room, sunny day                200 lx
  Outdoors, overcast                         50-500 lx
  Indoors, overcast                          5-50 lx
  Moonlight (full moon)                      2500 lx
  Clear moonlit night                        0.2 lx
  Dark night                                 0.0011 lx
As Table 1-1 shows, the light intensity of the current shooting scene varies greatly between scenes and moments.
To address the problem that, under some light intensities, the phase difference computed from an RGB pixel array is less accurate and focusing accuracy therefore drops sharply, a phase-information output mode adapted to the current scene's light intensity is determined under each light-intensity condition, and the phase information of the pixel array is output in the corresponding mode. A phase-information output mode is a mode of processing the raw phase information of the pixel array to generate the phase information finally output for that array.
The size of the output phase array differs between phase-information output modes; that is, under different scene light intensities the same pixel array outputs phase arrays of different sizes. In other words, depending on the scene's light intensity, the phase information corresponding to the same pixel array is either output directly as that array's phase array or merged to some degree to generate it. For example, if the current scene's light intensity is high, the phase information corresponding to the pixel array may be output directly as its phase array, in which case the output phase array is as large as the pixel array; if the light intensity is low, the phase information may be merged to some degree to generate the phase array, in which case the output phase array is smaller than the pixel array.
Because phase arrays of different sizes have different signal-to-noise ratios, the accuracy of the phase information output under different light intensities can be improved, and focusing accuracy with it.
Operation 1040: output, in the phase-information output mode, a phase array corresponding to the pixel array, the phase array including the phase information corresponding to target pixels in the pixel array.
After the phase-information output mode adapted to the current scene's light intensity has been determined, the phase information corresponding to the pixel array can be output in that mode, specifically in the form of a phase array, the phase array including the phase information corresponding to the pixel array.
Specifically, under different scene light intensities and in the mode adapted to each, the phase information corresponding to the same pixel array is either output directly as that array's phase array or merged to some degree to generate it, which this application does not limit.
Operation 1060: compute the phase difference of the pixel array based on the phase array, and perform focus control according to the phase difference.
After the phase array corresponding to the pixel array has been output in the phase-information output mode, the phase difference of the pixel array can be computed from the phase information in the phase array. If the phase array of the pixel array in the second direction can be obtained, the phase difference is computed from each pair of adjacent phase-information entries along the second direction, finally giving the phase difference of the whole pixel array in the second direction. Likewise, if the phase array in the first direction can be obtained, the phase difference is computed from adjacent entries along the first direction, finally giving the phase difference of the whole array in the first direction, the second direction being different from the first. The second direction may be the vertical direction of the pixel array and the first direction its horizontal direction, the two being mutually perpendicular. The phase differences of the whole pixel array in both the first and second directions may of course be obtained simultaneously, and phase differences in other directions may also be computed, for example the diagonal directions (the first diagonal direction and the second diagonal direction perpendicular to it), which this application does not limit.
When performing focus control based on the computed phase difference, note that for a texture feature running along some direction in the preview image of the current scene, the phase difference captured parallel to that direction is almost 0, so focusing obviously cannot be based on the phase difference captured parallel to that direction. Therefore, if the preview image of the current scene contains texture features in the first direction, the phase difference of the pixel array in the second direction is computed from the phase array in the second direction, and focus control is performed according to that phase difference.
For example, suppose the second direction is the vertical direction of the pixel array and the first direction its horizontal direction, the two being mutually perpendicular. A texture feature in the first direction then means the preview image contains horizontal stripes, possibly solid-colored horizontal stripes; since the preview image contains horizontal texture features, focus control is based on the vertical phase difference.
Conversely, if the preview image contains texture features in the second direction, focus control is based on the phase difference in the first direction; if it contains texture features along the first diagonal direction, focus control is based on the phase difference along the second diagonal direction, and vice versa. Only in this way can the phase difference be captured accurately for texture features in different directions, and focusing performed accurately.
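The per-direction phase-difference computation above can be illustrated with a minimal shift search between the two sub-signals of one direction (e.g., the left- and right-view samples of a row). The sum-of-absolute-differences cost and the sample signals are illustrative assumptions, not the method prescribed by this application:

```python
def phase_difference(left, right, max_shift=3):
    """Estimate the shift (in samples) between left- and right-view phase
    signals by minimizing the mean absolute difference over candidate shifts.
    A shift of 0 corresponds to the in-focus condition."""
    best_shift, best_cost = 0, float("inf")
    n = len(left)
    for s in range(-max_shift, max_shift + 1):
        cost, count = 0.0, 0
        for i in range(n):
            j = i + s
            if 0 <= j < n:                    # only compare overlapping samples
                cost += abs(left[i] - right[j])
                count += 1
        cost /= count
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift

# Toy signals: the right view is the left view displaced by 2 samples.
left = [0, 0, 5, 9, 5, 0, 0, 0]
right = [0, 0, 0, 0, 5, 9, 5, 0]
print(phase_difference(left, right))  # -> 2
```

In practice the same routine would be run on the vertical phase array for horizontal textures and on the horizontal phase array for vertical textures, matching the direction rule described above.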
In the embodiments of this application, a phase-information output mode adapted to the light intensity of the current shooting scene is determined according to that light intensity, the size of the output phase array differing between modes; a phase array corresponding to the pixel array, including the phase information corresponding to the pixel array, is output in that mode; and the phase difference of the pixel array is computed from the phase array and used for focus control.
Under different light intensities of the current shooting scene, the raw phase information that can be captured differs in accuracy. The same pixel array can therefore use different phase-information output modes depending on the scene's light intensity, outputting phase arrays of different sizes from the raw phase information. Since phase arrays of different sizes have different signal-to-noise ratios, the accuracy of the phase information output under different light intensities can be improved, and with it the accuracy of focus control.
Building on the previous embodiment, as shown in Fig. 11, operation 1040 of outputting, in the phase-information output mode, the phase array corresponding to the pixel array includes:
Operation 1042: in the phase-information output mode, determine target pixel groups from among the color pixel groups and panchromatic pixel groups of the pixel array, determine the target microlenses corresponding to the target pixel groups, and take the at least two pixels corresponding to each target microlens as target pixels.
First, the phase-information output mode adapted to the current scene's light intensity is determined from that light intensity: the mode determined differs with light intensity, and the size of the output phase array differs between modes.
Next, the target pixel groups are determined, in that mode, from the color and panchromatic pixel groups of the pixel array. Because the mode is chosen to match the scene's light intensity, and color pixels and panchromatic pixels have different sensitivities under different light intensities, the target pixel groups can be determined from the color and panchromatic pixel groups according to the mode.
Specifically, in different phase-information output modes, all or some of the color pixel groups may be selected from the pixel array as target pixel groups; all or some of the panchromatic pixel groups may be selected; or all or some of the color pixel groups together with all or some of the panchromatic pixel groups may be selected, which this application does not limit.
Finally, the target microlenses corresponding to the target pixel groups are determined, and the at least two pixels corresponding to each target microlens are taken as target pixels.
A target pixel group includes a plurality of microlenses, some corresponding to at least two pixels and some to one pixel. The target microlenses must therefore be determined from among the microlenses corresponding to at least two pixels: all of them may be taken as target microlenses, or only a subset, which this application does not limit.
Once the target microlenses are determined, the at least two pixels corresponding to each target microlens are taken as target pixels: either at least two of the corresponding pixels, or all of them, which this application does not limit.
Operation 1044: for each target pixel group, obtain the phase information of the target pixels.
Operation 1046: generate the phase array corresponding to the pixel array from the phase information of the target pixels.
After the target pixels have been determined from the target microlenses, the phase information of the target pixels is obtained for each target pixel group. The phase array corresponding to the pixel array may be generated directly from the target pixels' phase information, or their phase information may first be merged to some degree, which this application does not limit.
In the embodiments of this application, different light intensities correspond to different phase-information output modes, and in different modes the target pixel groups can be determined from the color and panchromatic pixel groups of the pixel array. The target pixel groups can thus be determined from at least one of the two kinds of pixel group; the target pixels are then determined within them, and the phase array corresponding to the pixel array is generated from the target pixels' phase information. Generating the phase array along these two dimensions — choosing the target pixel groups and choosing the target pixels — ultimately improves the accuracy of the generated phase array.
Following the previous embodiment, determining the target microlens corresponding to the target pixel group and taking its at least two pixels as target pixels in operation 1042 includes:
determining the first-direction target microlens corresponding to the target pixel group, the first-direction target microlens corresponding to at least 2 adjacent target pixels arranged along the first direction; and
taking the at least two pixels corresponding to the first-direction target microlens as target pixels.
After the target pixel groups have been determined from the color and panchromatic pixel groups in the phase-information output mode, the target microlenses corresponding to them are determined. Specifically, suppose a panchromatic pixel group of the pixel array is determined to be a target pixel group; the target microlenses are then determined from the microlenses corresponding to that group. A target microlens corresponds to at least two pixels, and, as shown in Fig. 8, the group's microlenses comprise 4 first microlenses 2211a, each corresponding to 2 pixels, and 1 second microlens 2211b, corresponding to 1 pixel.
The first-direction target microlenses are therefore further determined from the 4 first microlenses 2211a, a first-direction target microlens corresponding to at least 2 adjacent target pixels arranged along the first direction. If the first direction is horizontal, then, per Fig. 8, the 4 first microlenses 2211a include first-direction first microlenses 2211a (gray fill in the figure) and second-direction first microlenses 2211a (hatched fill); the first-direction target microlenses of the target pixel group are then the first-direction first microlenses 2211a (gray fill). The first and second directions are of course interchangeable here, which this application does not limit.
The at least two pixels corresponding to the first-direction first microlenses 2211a (gray fill) are taken as target pixels. Then, for each target pixel group, the phase information of the target pixels in the second direction can be obtained, and the phase array corresponding to the pixel array generated from the target pixels' phase information.
As shown in Fig. 9, the panchromatic pixel group includes 4 microlenses 2211: 1 third microlens 2211c, corresponding to 4 pixels, 2 fourth microlenses 2211d, each corresponding to 2 pixels, and 1 fifth microlens 2211e, corresponding to 1 pixel.
The first-direction target microlenses are accordingly determined from the 1 third microlens 2211c and 2 fourth microlenses 2211d. Since the third microlens 2211c covers 4 pixels arranged in a 2×2 array, it can be regarded as belonging to both the first and the second direction; the first-direction target microlenses of the target pixel group are then the first-direction first microlens 2211a (gray fill) and the third microlens 2211c.
The at least two pixels corresponding to the first-direction first microlens 2211a (gray fill) and the at least four pixels corresponding to the third microlens 2211c are taken as target pixels. Then, for each target pixel group, the phase information of the target pixels in the second direction can be obtained, and the phase array corresponding to the pixel array generated from it.
In the embodiments of this application, after the target pixel groups are determined in the phase-information output mode, the corresponding target microlenses are determined, specifically by taking the first-direction microlenses among those corresponding to the target pixel group as target microlenses. The at least two pixels corresponding to the first-direction first microlenses are then taken as target pixels, their phase information in the second direction is obtained, and the phase array corresponding to the pixel array is generated from it. Finally, for each target pixel group, the phase array of the pixel array in the second direction can be obtained.
Optionally, determining the target microlens corresponding to the target pixel group and taking its at least two pixels as target pixels in operation 1042 further includes:
determining the second-direction target microlens corresponding to the target pixel group, the second-direction target microlens corresponding to at least 2 adjacent target pixels arranged along the second direction, the second direction being perpendicular to the first; and
taking the at least two pixels corresponding to the second-direction target microlens as target pixels.
After the target pixel groups have been determined from the color and panchromatic pixel groups in the phase-information output mode, the target microlenses corresponding to them are determined. Specifically, suppose again that a panchromatic pixel group is a target pixel group; the target microlenses are determined from its microlenses. A target microlens corresponds to at least two pixels, and, as shown in Fig. 8, the group's microlenses comprise 4 first microlenses 2211a, each corresponding to 2 pixels, and 1 second microlens 2211b, corresponding to 1 pixel.
The second-direction target microlenses are therefore further determined from the 4 first microlenses 2211a, a second-direction target microlens corresponding to at least 2 adjacent target pixels arranged along the second direction, the second direction being perpendicular to the first. If the first direction is horizontal, the second is vertical. Per Fig. 8, the 4 first microlenses 2211a include first-direction first microlenses 2211a (gray fill) and second-direction first microlenses 2211a (hatched fill); the second-direction target microlenses of the target pixel group are then the second-direction first microlenses 2211a (hatched fill). The first and second directions are of course interchangeable here, which this application does not limit.
The at least two pixels corresponding to the second-direction first microlenses 2211a (hatched fill) are taken as target pixels. Then, for each target pixel group, the phase information of the target pixels in the first direction can be obtained, and the phase array corresponding to the pixel array generated from it.
As shown in Fig. 9, the panchromatic pixel group includes 4 microlenses 2211: 1 third microlens 2211c (4 pixels), 2 fourth microlenses 2211d (2 pixels each), and 1 fifth microlens 2211e (1 pixel).
The second-direction target microlenses are accordingly determined from the 1 third microlens 2211c and 2 fourth microlenses 2211d. Since the third microlens 2211c covers 4 pixels arranged in a 2×2 array, it can be regarded as belonging to both directions; the second-direction target microlenses of the target pixel group are then the second-direction first microlens 2211a (hatched fill) and the third microlens 2211c.
The at least two pixels corresponding to the second-direction first microlens 2211a (hatched fill) and the at least four pixels corresponding to the third microlens 2211c are taken as target pixels. Then, for each target pixel group, the phase information of the target pixels in the first direction can be obtained, and the phase array corresponding to the pixel array generated from it.
In the embodiments of this application, after the target pixel groups are determined in the phase-information output mode, the corresponding target microlenses are determined, specifically by taking the second-direction microlenses among those corresponding to each target pixel group as target microlenses. The at least two pixels corresponding to the second-direction first microlenses are then taken as target pixels, their phase information in the first direction is obtained, and the phase array corresponding to the pixel array is generated from it. Finally, for each target pixel group, the phase array of the pixel array in the first direction can be obtained.
Optionally, determining the target microlens corresponding to the target pixel group and taking its at least two pixels as target pixels in operation 1042 includes:
determining the first-direction and second-direction target microlenses corresponding to the target pixel group, the first-direction target microlens corresponding to at least 2 adjacent target pixels arranged along the first direction, the second-direction target microlens corresponding to at least 2 adjacent target pixels arranged along the second direction, the second direction being perpendicular to the first; and
taking the at least two pixels corresponding to the first-direction target microlens and the at least two pixels corresponding to the second-direction target microlens as target pixels.
After the target pixel groups have been determined from the color and panchromatic pixel groups in the phase-information output mode, the target microlenses corresponding to them are determined. Specifically, suppose a panchromatic pixel group is a target pixel group; as shown in Fig. 8, its microlenses comprise 4 first microlenses 2211a, each corresponding to 2 pixels, and 1 second microlens 2211b, corresponding to 1 pixel.
The first- and second-direction target microlenses are therefore further determined from the 4 first microlenses 2211a, each target microlens corresponding to at least 2 adjacent target pixels arranged along its direction. If the first direction is horizontal, then, per Fig. 8, the 4 first microlenses 2211a include first-direction first microlenses 2211a (gray fill) and second-direction first microlenses 2211a (hatched fill): the first-direction target microlenses of the target pixel group are the first-direction first microlenses 2211a (gray fill), and the second-direction target microlenses are the second-direction first microlenses 2211a (hatched fill). The two directions are of course interchangeable here, which this application does not limit.
The at least two pixels corresponding to the first-direction first microlenses 2211a (gray fill) and the second-direction first microlenses 2211a (hatched fill) are taken as target pixels. Then, for each target pixel group, the phase information of the target pixels in the first direction and in the second direction can be obtained, and the phase array corresponding to the pixel array generated from it.
As shown in Fig. 9, the panchromatic pixel group includes 4 microlenses 2211: 1 third microlens 2211c (4 pixels), 2 fourth microlenses 2211d (2 pixels each), and 1 fifth microlens 2211e (1 pixel).
The first- and second-direction target microlenses are accordingly determined from the 1 third microlens 2211c and 2 fourth microlenses 2211d. Since the third microlens 2211c covers 4 pixels arranged in a 2×2 array, it can be regarded as belonging to both directions; the first- and second-direction target microlenses of the target pixel group are then the first-direction first microlens 2211a (gray fill), the second-direction first microlens 2211a (hatched fill), and the third microlens 2211c.
The at least two pixels corresponding to the first-direction first microlens 2211a (gray fill) and the second-direction first microlens 2211a (hatched fill), together with the at least four pixels corresponding to the third microlens 2211c, are taken as target pixels. Then, for each target pixel group, the phase information of the target pixels in the first and second directions can be obtained, and the phase array corresponding to the pixel array generated from it.
In the embodiments of this application, after the target pixel groups are determined in the phase-information output mode, the corresponding target microlenses are determined. The first-direction microlenses among those corresponding to a target pixel group can be taken as target microlenses, their at least two pixels taken as target pixels, and the target pixels' phase information in the second direction obtained; the second-direction microlenses can likewise be taken as target microlenses, their at least two pixels taken as target pixels, and the phase information in the first direction obtained. The phase array corresponding to the pixel array is then generated from the target pixels' phase information. Finally, for each target pixel group, the phase arrays of the pixel array in both the first and second directions can be obtained, so phase differences can be computed in both directions and focusing achieved in both.
The previous embodiments described determining, according to the scene's light intensity, different phase-information output modes for the same pixel array, outputting the phase array corresponding to that array, computing the pixel array's phase difference from the phase array, and performing focus control according to the phase difference. In this embodiment, as shown in Fig. 12, operation 1020 of determining the phase-information output mode adapted to the current scene's light intensity is detailed as follows:
Operation 1022: determine the target light-intensity range to which the current scene's light intensity belongs, different light-intensity ranges corresponding to different phase-information output modes;
Operation 1024: determine, from the target light-intensity range, the phase-information output mode adapted to the current scene's light intensity.
Specifically, light intensity can be divided, in order of magnitude, into different light-intensity ranges by different preset thresholds, which can be determined from the exposure parameters and the pixel size of the pixel array. The exposure parameters include shutter speed, lens aperture, and sensitivity (ISO).
Different phase-information output modes are then set for the different light-intensity ranges. Specifically, in order of decreasing light intensity of the ranges, the size of the phase array output by the mode set for each range decreases in turn.
The range into which the current scene's light intensity falls is taken as the target light-intensity range to which it belongs, and the phase-information output mode corresponding to that target range is taken as the mode adapted to the current scene's light intensity.
In the embodiments of this application, when determining the phase-information output mode adapted to the current scene's light intensity, since different light-intensity ranges correspond to different modes, the target range to which the scene's light intensity belongs is determined first, and the adapted mode is then determined from that target range. Different modes are preset for different ranges, each outputting a phase array of a different size; the phase information of the pixel array can therefore be computed in a more fine-grained way based on the scene's light intensity, achieving more accurate focusing.
Following the previous embodiment, it is further described that the phase-information output modes include a full-size output mode and a first-size output mode, the phase array in the full-size output mode being larger than that in the first-size output mode. As shown in Fig. 13, operation 1024 of determining the adapted phase-information output mode from the target light-intensity range includes:
Operation 1024a: if the current scene's light intensity is greater than a first preset threshold, determine the phase-information output mode adapted to it to be the full-size output mode;
Operation 1024b: if the current scene's light intensity is greater than a second preset threshold and less than or equal to the first preset threshold, determine the phase-information output mode adapted to it to be the first-size output mode.
Specifically, if one light-intensity range is the range above the first preset threshold and its corresponding mode is the full-size output mode, then when the scene's light intensity is judged greater than the first preset threshold it falls into that range, and the adapted mode is the full-size output mode. Outputting a phase array in full-size mode means outputting all of the pixel array's raw phase information to generate that array's phase array.
If another light-intensity range lies above the second preset threshold and at or below the first, with the first-size output mode as its corresponding mode, then when the scene's light intensity is judged to lie in that range the adapted mode is the first-size output mode. Outputting a phase array in first-size mode means merging the pixel array's raw phase information before output to generate that array's phase array.
In the embodiments of this application, because the phase array in full-size output mode is larger than that in first-size output mode, a scene brighter than the first preset threshold uses the full-size output mode, while a scene between the second and first preset thresholds uses the first-size output mode. That is, if the scene's light intensity is high, the full-size mode outputs a phase array the same size as the pixel array; if it is somewhat lower, the first-size mode outputs a phase array smaller than the pixel array. At the lower light level, shrinking the phase array raises the signal-to-noise ratio of the phase information.
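The threshold logic of operations 1024a and 1024b can be sketched as follows. The 2000 lux and 500 lux values match the examples given elsewhere in the description, the mode names are informal labels, and the final branch (a further-binned "second size" mode, suggested by Fig. 16) is an assumption:

```python
# Hypothetical thresholds taken from the description's examples.
FIRST_THRESHOLD = 2000   # lux
SECOND_THRESHOLD = 500   # lux

def phase_output_mode(light_intensity_lux):
    """Pick a phase-information output mode from the scene light intensity.
    Brighter scenes keep full resolution; dimmer scenes merge (bin) phase
    samples, trading resolution for signal-to-noise ratio."""
    if light_intensity_lux > FIRST_THRESHOLD:
        return "full_size"    # phase array as large as the pixel array
    elif light_intensity_lux > SECOND_THRESHOLD:
        return "first_size"   # merged phase array, smaller than the pixel array
    else:
        return "second_size"  # heavier merging for very low light (assumed)
```

For example, a sunlit scene at 5000 lux selects the full-size mode, while an overcast indoor scene at 1000 lux selects the first-size mode.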
The previous embodiments described that the pixel array may be an RGBW pixel array including a plurality of minimal repeating units 241, each comprising pixel groups 242 that include panchromatic pixel groups 243 and color pixel groups 244; each panchromatic pixel group 243 includes 9 panchromatic pixels 2431 and each color pixel group 244 includes 9 color pixels 2441, each panchromatic pixel 2431 and each color pixel 2441 including 2 sub-pixels arranged in an array.
In this embodiment, with an RGBW pixel array, it is detailed that if the phase-information output mode is the full-size output mode, determining target pixel groups from the color and panchromatic pixel groups of the pixel array includes:
taking the color pixel groups of the pixel array as target pixel groups, or taking the color and panchromatic pixel groups of the pixel array as target pixel groups, where one target pixel group corresponds to one pixel group;
and correspondingly,
generating the phase array corresponding to the pixel array from the target pixels' phase information includes:
generating the full-size phase array corresponding to the pixel array from the target pixels' phase information.
This embodiment is the concrete implementation of outputting the phase array corresponding to the pixel array in full-size output mode when the current scene's light intensity is greater than the first preset threshold. The first preset threshold may be 2000 lux, which this application does not limit; the environment is then brighter than 2000 lux. As shown in Fig. 11, the color pixel groups are first determined from the pixel array as the target pixel groups used to compute phase information. When the scene's light intensity exceeds the first preset threshold, i.e., in a well-lit scene, the highly sensitive panchromatic pixels saturate easily, and once saturated they yield no correct phase information, so the phase information of the color pixel groups can be used to achieve phase detection auto focus (PDAF). The color and panchromatic pixel groups of the pixel array may of course also be used together as target pixel groups.
Specifically, if the color pixel groups' phase information is used for PDAF, the phase information of only some of the color pixel groups may be used — for example, that of at least one of the red, green, and blue pixel groups — or only some pixels within some color pixel groups may be used, for example some pixels of the red pixel groups, which this application does not limit. Since only the color pixel groups' phase information is used for phase focusing, the amount of phase data output is reduced, and the efficiency of phase focusing improved.
Next, the target microlenses are determined from the microlenses corresponding to the target pixel groups, and the at least two pixels corresponding to each target microlens are taken as target pixels.
Specifically, the first-direction target microlenses (each corresponding to at least 2 adjacent target pixels arranged along the first direction) may be determined and their at least two pixels taken as target pixels; or the second-direction target microlenses (at least 2 adjacent target pixels arranged along the second direction, the second direction perpendicular to the first) and their pixels; or both the first- and second-direction target microlenses may be determined, and the at least two pixels corresponding to each taken as target pixels.
Then, for each target pixel group, the phase information of every target pixel in the group is obtained, each pixel corresponding to one photosensitive element. In this embodiment, suppose all the color pixel groups of the pixel array are taken as target pixel groups; for each color pixel group, the phase information of each of its target pixels is obtained. Finally, the phase information of all the target pixels is output as the full-size phase array corresponding to the pixel array.
As shown in Fig. 14, a 12×12 pixel array may include 2 red pixel groups, 4 green pixel groups, 2 blue pixel groups, and 8 panchromatic pixel groups. Here, following Figs. 7 and 8, suppose a denotes green, b red, c blue, and w panchromatic, and suppose all the color pixel groups of the array are target pixel groups: the phase information of each of the 2 red, 4 green, and 2 blue pixel groups is computed in turn. For example, the phase information of the red pixel group 244 is computed as follows. The group includes 9 red pixels arranged in a 3×3 array, numbered red pixel 1 through red pixel 9, each pixel corresponding to one photosensitive element. The phase information of red pixel 1 is L1 and of red pixel 2 is R1; of red pixel 8, L2, and of red pixel 9, R2; of red pixel 3, U1, and of red pixel 6, D1; of red pixel 4, U2, and of red pixel 7, D2. Pixel 5 at the exact center of the group corresponds to a microlens of its own, so no phase information is obtained for it.
Thus, if the first-direction target microlenses of the target pixel group are determined and their at least two pixels taken as target pixels, and the first direction is horizontal, L1, R1, L2, R2 are output in turn as the red pixel group's phase information.
The other red pixel group, 4 green pixel groups, and 2 blue pixel groups of the array are processed in the same way, giving the array's phase information L3, R3, L4, R4, L5, R5, L6, R6, L7, R7, L8, R8, L9, R9, L10, R10, L11, R11, L12, R12, L13, R13, L14, R14, L15, R15, L16, R16.
Finally, the phase information of all the target pixels is output as the full-size phase array corresponding to the pixel array: L1, R1, L2, R2, L3, R3, L4, R4, L5, R5, L6, R6, L7, R7, L8, R8, L9, R9, L10, R10, L11, R11, L12, R12, L13, R13, L14, R14, L15, R15, L16, R16 arranged in turn. This full-size phase array is as large as 4×8 pixels arranged in an array, where the size of a pixel means its area, related to the pixel's length and width.
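The per-group readout just described can be sketched as follows. The dict-based group model and the pixel numbering (pixels 1 & 2 and 8 & 9 under horizontal shared microlenses, pixel 5 alone at the center) are assumptions for illustration, mirroring the Fig. 14 walkthrough:

```python
# A 3x3 pixel group modeled as a dict of pixel-number -> signal value.
# Assumed layout: pixels 1&2 and 8&9 sit under shared horizontal microlenses
# (left, right); pixel 5 is under its own microlens and yields no phase info.
HORIZONTAL_PAIRS = [(1, 2), (8, 9)]   # (left pixel, right pixel)

def group_phase_pairs(group):
    """Return the (L, R) phase samples of one pixel group in full-size mode:
    every target pixel's value is output directly, without merging."""
    return [(group[l], group[r]) for l, r in HORIZONTAL_PAIRS]

def full_size_phase_array(groups):
    """Concatenate the per-group (L, R) pairs into the full-size phase array."""
    out = []
    for g in groups:
        out.extend(group_phase_pairs(g))
    return out

g1 = {1: 10, 2: 12, 8: 11, 9: 13, 5: 9}
g2 = {1: 20, 2: 22, 8: 21, 9: 23, 5: 19}
print(full_size_phase_array([g1, g2]))
# -> [(10, 12), (11, 13), (20, 22), (21, 23)]
```

Each pair corresponds to one (Ln, Rn) entry in the enumeration above; vertical (U, D) pairs would be collected the same way from the vertically arranged microlenses.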
A pixel is the smallest photosensitive unit on a digital camera's photosensitive device (CCD or CMOS), where CCD stands for charge-coupled device and CMOS for complementary metal-oxide-semiconductor. In general a pixel has no fixed size; its size depends on the display's dimensions and resolution. For example, for a 4.5-inch display with a resolution of 1280×720, the display is 99.6 mm long and 56 mm wide, so a pixel is 99.6 mm/1280 = 0.0778 mm long and 56 mm/720 = 0.0778 mm wide. In that example, 4×8 pixels arranged in an array measure 4×0.0778 mm in one dimension by 8×0.0778 mm in the other, and so does the full-size phase array, though this application does not limit this. In other embodiments a pixel need not be a rectangle of equal length and width and may have other shapes, which this application does not limit.
Similarly, if the second-direction target microlenses of the target pixel group are determined and their at least two pixels taken as target pixels, and the second direction is vertical, U1, D1, U2, D2 are output in turn as the red pixel group's phase information.
The other red pixel group, 4 green pixel groups, and 2 blue pixel groups of the array are processed likewise, giving the array's phase information U3, D3, U4, D4, U5, D5, U6, D6, U7, D7, U8, D8, U9, D9, U10, D10, U11, D11, U12, D12, U13, D13, U14, D14, U15, D15, U16, D16.
Finally, the phase information of all the target pixels is output as the full-size phase array corresponding to the pixel array: U1, D1, U2, D2, U3, D3 through U16, D16 arranged in turn, the full-size phase array again being as large as 4×8 pixels arranged in an array.
Similarly, if both the first- and second-direction target microlenses of the target pixel group are determined and their at least two pixels taken as target pixels, with the first direction horizontal and the second vertical, L1, R1, L2, R2, U1, D1, U2, D2 are output in turn as the red pixel group's phase information. The other red pixel group, 4 green pixel groups, and 2 blue pixel groups are processed likewise to obtain the array's phase information, and the phase information of all the target pixels is finally output as the full-size phase array corresponding to the pixel array, which is as large as 8×8 pixels arranged in an array.
The phase array can then be fed to the ISP (Image Signal Processing), which computes the pixel array's phase difference from the phase array. The defocus distance is computed from the phase difference, and the DAC code value corresponding to that defocus distance is computed; the driver IC of the motor (VCM) converts the code value into a drive current, and the motor drives the lens to the sharp position. Focus control is thus achieved according to the phase difference.
In the embodiments of this application, when the current scene's light intensity exceeds the first preset threshold, the phase array corresponding to the pixel array is output in full-size mode. The color pixel groups of the array serve as target pixel groups, and for each target pixel group the phase information of every pixel is obtained. Phase information of the target pixels can be obtained from at least one of the first- and second-direction microlenses, so multiple kinds of phase information are available. Finally, the full-size phase array corresponding to the pixel array is generated from the target pixels' phase information. Since only the color pixel groups' phase information is used for phase focusing, the amount of phase data output is reduced, and the efficiency of phase focusing improved.
In one embodiment, it is detailed that if the phase-information output mode is the first-size output mode, determining target pixel groups from the color and panchromatic pixel groups of the pixel array includes:
taking at least one of the color pixel groups and panchromatic pixel groups of the pixel array as target pixel groups, a target pixel group corresponding to one pixel group;
and correspondingly,
generating the phase array corresponding to the pixel array from the target pixels' phase information includes:
merging the phase information of the target pixels within each target pixel group to generate sets of intermediate merged phase information; and
generating the first-size phase array corresponding to the pixel array from the sets of intermediate merged phase information.
This embodiment is the concrete implementation of outputting the phase array corresponding to the pixel array in first-size output mode when the current scene's light intensity is greater than the second preset threshold and less than or equal to the first. The second preset threshold may be 500 lux, which this application does not limit; the environment is then brighter than 500 lux and at most 2000 lux. As shown in Fig. 15, at least one of the color and panchromatic pixel groups is first determined from the pixel array as the target pixel groups used to compute phase information. In this moderately dim scene, panchromatic pixels do not saturate easily, so the panchromatic pixel groups' phase information can be used for phase detection auto focus (PDAF); and color pixels can still capture accurate phase information in such a scene, so the color pixel groups' phase information can also be used for PDAF. Hence, when outputting the phase array in first-size mode, the color pixel groups, the panchromatic pixel groups, or both may be chosen as target pixel groups, which this application does not limit.
Specifically, when the target pixel groups are color pixel groups, the phase information of some color pixel groups, or of some color pixels within some of them, may be used for phase focusing; likewise, when they are panchromatic pixel groups, the phase information of some panchromatic pixel groups, or of some panchromatic pixels within some of them, may be used; and likewise, when both kinds are used, the phase information of some panchromatic and some color pixel groups, or of some pixels within each, may be used, which this application does not limit.
Since only part of the pixel groups' phase information, or the phase information of only some pixels within them, may be used for phase focusing, the amount of phase data output is reduced, and the efficiency of phase focusing improved.
Next, the target microlenses are determined from the microlenses corresponding to the target pixel groups, and the at least two pixels corresponding to each target microlens are taken as target pixels.
Specifically, the first-direction target microlenses (each corresponding to at least 2 adjacent target pixels arranged along the first direction), the second-direction target microlenses (at least 2 adjacent target pixels arranged along the second direction, the second direction perpendicular to the first), or both, may be determined, and the at least two pixels corresponding to each taken as target pixels.
Then, for each target pixel group, the phase information of every target pixel in the group is obtained, each pixel corresponding to one photosensitive element. In this embodiment, suppose on the one hand that all the panchromatic pixel groups of the array are taken as target pixel groups: for each panchromatic pixel group, the phase information of each of its target pixels is obtained, the target pixels' phase information is merged into sets of intermediate merged phase information, and the first-size phase array corresponding to the pixel array is finally generated from those sets.
As shown in Fig. 15, a pixel array may include 2 red pixel groups, 4 green pixel groups, 2 blue pixel groups, and 8 panchromatic pixel groups. Suppose all the panchromatic pixel groups of the array are target pixel groups: the phase information of each of the 8 is computed in turn. For example, a panchromatic pixel group includes 9 panchromatic pixels arranged in a 3×3 array, numbered panchromatic pixel 1 through 9, each pixel corresponding to one photosensitive element. The phase information of panchromatic pixel 1 is L1 and of pixel 2 is R1; of pixel 8, L2, and of pixel 9, R2; of pixel 3, U1, and of pixel 6, D1; of pixel 4, U2, and of pixel 7, D2. Pixel 5 at the exact center of the group corresponds to a microlens of its own, so no phase information is obtained for it.
Thus, if the first-direction target microlenses of the target pixel group are determined and their at least two pixels taken as target pixels, and the first direction is horizontal, L1, R1, L2, R2 are output in turn and the target pixels' phase information is then merged into sets of intermediate merged phase information: for example, L1 and L2 are merged into first intermediate merged phase information L, and R1 and R2 into first intermediate merged phase information R.
The other panchromatic pixel groups of the array are processed likewise, giving the array's phase information L3, R3, L4, R4, L5, R5, L6, R6, L7, R7, L8, R8, L9, R9, L10, R10, L11, R11, L12, R12, L13, R13, L14, R14, L15, R15, L16, R16. L3 and L4 are then merged into a first intermediate merged phase information L and R3 and R4 into an R, and similarly for L5/L6 and R5/R6, L7/L8 and R7/R8, L9/L10 and R9/R10, L11/L12 and R11/R12, L13/L14 and R13/R14, and L15/L16 and R15/R16.
Finally, the first-size phase array corresponding to the pixel array is generated from the sets of intermediate merged phase information: the 8 pairs of first intermediate merged phase information L and R, arranged in turn, form the first-size phase array, which is as large as 4×4 pixels arranged in an array.
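The merging step just described (L1 and L2 into one L, R1 and R2 into one R) can be sketched as follows. The text does not specify whether "merging" is a sum or an average, so an average is assumed here:

```python
def merge_first_size(pairs):
    """First-size output: combine the (L, R) samples of the two horizontal
    microlenses of one group into a single (L, R) pair by averaging,
    raising the signal-to-noise ratio at the cost of resolution."""
    (l1, r1), (l2, r2) = pairs
    return ((l1 + l2) / 2.0, (r1 + r2) / 2.0)

# Two (L, R) pairs from one panchromatic group are merged into one pair.
print(merge_first_size([(10, 12), (14, 16)]))  # -> (12.0, 14.0)
```

Applying this to each of the 8 panchromatic groups yields the 8 merged (L, R) pairs that make up the first-size phase array; the (U, D) samples of the vertical microlenses would be merged the same way.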
The 8 pairs of first intermediate merged phase information L and R may of course also undergo further merging or transformation to generate the first-size phase array corresponding to the pixel array; transformation here may mean, for example, correcting the 8 pairs, which this application does not limit. When merging them further, a phase array equivalent to 4×4 pixels in size may be merged into one equivalent to 4×2 pixels, though this application does not limit the merged array's specific size. Here, the size of a pixel again means its area, related to the pixel's length and width.
A pixel is the smallest photosensitive unit on a digital camera's photosensitive device (CCD or CMOS). In general a pixel has no fixed size; its size depends on the display's dimensions and resolution. For example, for a 4.5-inch display at a resolution of 1280×720, the display is 99.6 mm long and 56 mm wide, so a pixel is 99.6 mm/1280 = 0.0778 mm long and 56 mm/720 = 0.0778 mm wide. In that example, 4×4 pixels arranged in an array measure 4×0.0778 mm by 4×0.0778 mm, and so does the phase array, without limitation here. In other embodiments a pixel need not be a rectangle of equal length and width and may have other shapes, which this application does not limit.
Similarly, if the second-direction target microlenses of the target pixel group are determined and their at least two pixels taken as target pixels, and the second direction is vertical, U1, D1, U2, D2 are output in turn and the target pixels' phase information is then merged into sets of intermediate merged phase information: for example, U1 and U2 are merged into first intermediate merged phase information U, and D1 and D2 into first intermediate merged phase information D.
The other panchromatic pixel groups of the array are processed likewise, giving the array's phase information U3, D3, U4, D4, U5, D5, U6, D6, U7, D7, U8, D8, U9, D9, U10, D10, U11, D11, U12, D12, U13, D13, U14, D14, U15, D15, U16, D16. U3 and U4 are then merged into a U and D3 and D4 into a D, and similarly for U5/U6 and D5/D6, U7/U8 and D7/D8, U9/U10 and D9/D10, U11/U12 and D11/D12, U13/U14 and D13/D14, and U15/U16 and D15/D16.
Finally, the first-size phase array corresponding to the pixel array is generated from the sets of intermediate merged phase information: the 8 pairs of first intermediate merged phase information U and D, arranged in turn, form the first-size phase array, which is as large as 4×4 pixels arranged in an array.
Similarly, if both the first- and second-direction target microlenses are determined and their at least two pixels taken as target pixels, with the first direction horizontal and the second vertical, L1, R1, L2, R2, U1, D1, U2, D2 are output in turn and the target pixels' phase information is then merged into sets of intermediate merged phase information: L1 and L2 into an L, R1 and R2 into an R, U1 and U2 into a U, and D1 and D2 into a D.
The other panchromatic pixel groups of the array are processed likewise to obtain the array's phase information. Finally, the target pixels' phase information is merged into sets of intermediate merged phase information, which serve as the first-size phase array corresponding to the pixel array, the first-size phase array being as large as 8×8 pixels arranged in an array.
On the other hand, suppose all the color pixel groups of the array are taken as target pixel groups: for each color pixel group, the phase information of each of its target pixels is obtained, the target pixels' phase information is merged into sets of intermediate merged phase information, and the first-size phase array corresponding to the pixel array is finally generated from those sets. The specific computation is the same as that of the first-size phase array when all the panchromatic pixel groups are target pixel groups, and is not repeated here.
Likewise, suppose all the panchromatic and color pixel groups of the array are taken as target pixel groups: for each panchromatic and color pixel group, the phase information of each of its target pixels is obtained, merged into sets of intermediate merged phase information, and the first-size phase array generated from them; again, the specific computation is as above and is not repeated here.
The phase array can then be fed to the ISP, which computes the pixel array's phase difference from the phase array. The defocus distance is computed from the phase difference, the DAC code value corresponding to that defocus distance is computed, the driver IC of the motor (VCM) converts the code value into a drive current, and the motor drives the lens to the sharp position. Focus control is thus achieved according to the phase difference.
In the embodiments of this application, when the current scene's light intensity is greater than the second preset threshold and at most the first, the light is somewhat weak, so the phase information captured through the color pixel groups or panchromatic pixel groups is not very accurate, and some color or panchromatic pixel groups may capture none. Therefore at least one of the color and panchromatic pixel groups of the pixel array is taken as target pixel groups, and for each target pixel group the phase information of every pixel is obtained; phase information of the target pixels can be obtained from at least one of the first- and second-direction microlenses, so multiple kinds of phase information are available.
The target pixels' phase information is merged into sets of intermediate merged phase information, from which the first-size phase array corresponding to the pixel array is generated. Merging the target pixels' phase information to some degree in first-size output mode improves the accuracy of the output phase information and raises its signal-to-noise ratio; finally, performing phase focusing based on the first-size phase array corresponding to the pixel array improves focusing accuracy.
Following the previous embodiment, if the target pixels are the at least two pixels corresponding to a first-direction target microlens, merging the target pixels' phase information into sets of intermediate merged phase information includes:
obtaining at least two first-direction target microlenses within the target pixel group, and determining, among the pixels corresponding to them, the target pixels at the same position — those occupying the same orientation along the first direction within their respective target microlenses; and
merging the phase information of the same-position target pixels to generate sets of first merged phase information.
As shown in Fig. 15, if the first-direction target microlenses of the target pixel group are determined and their at least two pixels taken as target pixels, and the first direction is horizontal, L1, R1, L2, R2 are output in turn and the target pixels' phase information is then merged into sets of intermediate merged phase information.
Specifically, as shown in Fig. 8(a), at least two first-direction target microlenses (gray fill in the figure) are obtained within the target pixel group, and the same-position target pixels are determined among the pixels corresponding to them, same-position meaning the same orientation along the first direction within the respective target microlens. For example, among pixels 1, 2, 8, and 9 corresponding to the first-direction target microlenses 2211a (gray fill), the pixels on the left side of each first-direction microlens are same-position target pixels, as are the pixels on the right side. L1 and L2 are thus merged into first intermediate merged phase information L, and R1 and R2 into first intermediate merged phase information R.
From the sets of first intermediate merged phase information L and R, sets of first merged phase information are generated.
In the embodiments of this application, at least two first-direction target microlenses are obtained within the target pixel group, same-position target pixels are determined among the pixels corresponding to them, and the same-position target pixels' phase information is merged to generate sets of first merged phase information. Merging the pixels' phase information to some degree in first-size output mode improves the accuracy of the output phase information and raises its signal-to-noise ratio; finally, phase focusing based on the first-size phase array corresponding to the pixel array improves focusing accuracy.
Following the previous embodiment, suppose the relevant direction is vertical, here named the second direction. If the target pixels are the at least two pixels corresponding to a second-direction target microlens, merging the phase information of the target pixels within the target pixel group into sets of intermediate merged phase information includes:
obtaining at least two second-direction target microlenses within the target pixel group, and determining, among the pixels corresponding to them, the target pixels at the same position — those occupying the same orientation along the second direction within their respective target microlenses; and
merging the phase information of the same-position target pixels to generate sets of second merged phase information.
As shown in Fig. 15, if the second-direction target microlenses of the target pixel group are determined and their at least two pixels taken as target pixels, and the second direction is vertical, U1, D1, U2, D2 are output in turn and the target pixels' phase information is then merged into sets of intermediate merged phase information.
Specifically, as shown in Fig. 8(a), at least two second-direction target microlenses (hatched fill in the figure) are obtained within the target pixel group, and the same-position target pixels are determined among the pixels corresponding to them, same-position meaning the same orientation along the second direction within the respective target microlens. For example, among pixels 3, 6, 4, and 7 corresponding to the second-direction target microlenses 2211a (hatched fill), the upper pixels of each second-direction microlens are same-position target pixels, as are the lower pixels. U1 and U2 are thus merged into second intermediate merged phase information U, and D1 and D2 into second intermediate merged phase information D.
From the sets of second intermediate merged phase information U and D, sets of second merged phase information are generated.
In the embodiments of this application, at least two second-direction target microlenses are obtained within the target pixel group, same-position target pixels are determined among the pixels corresponding to them, and the same-position target pixels' phase information is merged to generate sets of second merged phase information. Merging the pixels' phase information to some degree in first-size output mode improves the accuracy of the output phase information and raises its signal-to-noise ratio; finally, phase focusing based on the first-size phase array corresponding to the pixel array improves focusing accuracy.
接前一个实施例,若目标像素为第一方向的目标微透镜所对应的至少两个像素及第二方向的目标微透镜所对应的至少两个像素,则将目标像素组内的目标像素的相位信息进行合并,生成多组中间合并相位信息,包括:
获取目标像素组内的至少两个第一方向的目标微透镜,并从至少两个第一方向的目标微透镜对应的像素中确定处于同一位置的目标像素;所述处于同一位置的目标像素在对应的所述目标微透镜中的所述第一方向上处于同一方位;
将处于同一位置的目标像素的相位信息进行合并,生成多组第一合并相位信息;
获取目标像素组内的至少两个第二方向的目标微透镜,并从至少两个第二方向的目标微透镜对应的像素中确定处于同一位置的目标像素;所述处于同一位置的目标像素在对应的所述目标微透镜中的所述第二方向上处于同一方位;
将处于同一位置的目标像素的相位信息进行合并,生成多组第二合并相位信息;
基于多组第一合并相位信息及多组第二合并相位信息,生成多组中间合并相位信息。
具体的,结合图15所示,若确定与目标像素组对应的第一方向的目标微透镜,且将第一方向的目标微透镜所对应的至少两个像素作为目标像素。假设第一方向为水平方向,则将L1、R1、L2、R2依次输出,再将目标像素的相位信息进行合并,生成多组中间合并相位信息。
具体的,结合图8中(a)所示,获取目标像素组内的至少两个第一方向的目标微透镜(图中灰色填充),并从至少两个第一方向的目标微透镜对应的像素中确定处于同一位置的目标像素。其中,所述处于同一位置的目标像素在对应的所述目标微透镜中的所述第一方向上处于同一方位。例如,从第一方向的目标微透镜2211a(图中灰色填充)对应的像素1、像素2、像素8、像素9中,确定处于同一位置的目标像素。即确定处于第一方向的目标微透镜2211a(图中灰色填充)的左侧的像素为目标像素,确定处于第一方向的目标微透镜2211a(图中灰色填充)的右侧的像素为目标像素。即将L1、L2进行合并,生成第一中间合并相位信息L;将R1、R2进行合并,生成第一中间合并相位信息R。
基于多组第一中间合并相位信息L、第一中间合并相位信息R,生成多组第一合并相位信息。
若确定与目标像素组对应的第二方向的目标微透镜,且将第二方向的目标微透镜所对应的至少两个像素作为目标像素。假设第二方向为竖直方向,则将U1、D1、U2、D2依次输出,再将目标像素的相位信息进行合并,生成多组中间合并相位信息。
具体的,获取目标像素组内的至少两个第二方向的目标微透镜(图中斜线填充),并从至少两个第二方向的目标微透镜对应的像素中确定处于同一位置的目标像素。其中,所述处于同一位置的目标像素在对应的所述目标微透镜中的所述第二方向上处于同一方位。例如,从第二方向的目标微透镜2211a(图中斜线填充)对应的像素3、像素6、像素4、像素7中,确定处于同一位置的目标像素。即确定处于第二方向的目标微透镜2211a(图中斜线填充)的上方的像素为目标像素,确定处于第二方向的目标微透镜2211a(图中斜线填充)的下方的像素为目标像素。即将U1、U2进行合并,生成第二中间合并相位信息U;将D1、D2进行合并,生成第二中间合并相位信息D。
基于多组第二中间合并相位信息U、第二中间合并相位信息D,生成多组第二合并相位信息。
最后,基于多组第一合并相位信息及多组第二合并相位信息,生成多组中间合并相位信息。
本申请实施例中,获取目标像素组内的至少两个第一方向的目标微透镜及至少两个第二方向的目标微透镜,并从目标微透镜对应的像素中确定处于同一位置的目标像素。将处于同一位置的目标像素的相位信息进行合并,生成多组第一合并相位信息、第二合并相位信息。最后,基于多组第一合并相位信息及多组第二合并相位信息,生成多组中间合并相位信息。采用第一尺寸输出模式将像素的相位信息进行一定程度的合并,提高所输出的相位信息的准确性,提高相位信息的信噪比。最终,基于与像素阵列对应的第一尺寸相位阵列进行相位对焦,就可以提高对焦的准确性。
接前一个实施例,若目标像素组包括彩色像素组及全色像素组,则根据目标相位信息,生成像素阵列在第一方向的第一尺寸相位阵列,包括:
根据当前拍摄场景的光线强度确定彩色像素组对应的第一相位权重以及全色像素组对应的第二相位权重;其中,彩色像素组在不同的光线强度下所对应的第一相位权重不同,全色像素组在不同的光线强度下所对应的第二相位权重不同;
基于彩色像素组的目标相位信息及第一相位权重、全色像素组的目标相位信息及第二相位权重,生成像素阵列在第一方向的第一尺寸相位阵列。
具体的,在当前拍摄场景的光线强度大于第二预设阈值,且小于或等于第一预设阈值的场景下,且所确定的目标像素组包括彩色像素组及全色像素组,那么,基于目标像素组的相位信息生成第一尺寸相位阵列时,可以考虑不同像素组之间的权重。其中,可以根据当前拍摄场景的光线强度确定彩色像素组对应的第一相位权重以及全色像素组对应的第二相位权重。具体的,当前拍摄场景的光线强度越接近第二预设阈值,则此时彩色像素组对应的第一相位权重越小,而全色像素组对应的第二相位权重越大。因为此时光线强度在大于第二预设阈值,且小于或等于第一预设阈值的场景下偏小,全色像素组对应的第二相位权重越大,则所获取到的相位信息越准确。随着光线强度增大,当前拍摄场景的光线强度越接近第一预设阈值,则此时彩色像素组对应的第一相位权重越大,而全色像素组对应的第二相位权重越小。因为此时光线强度在大于第二预设阈值,且小于或等于第一预设阈值的场景下偏大,彩色像素组对应的第一相位权重越大,则所获取到的相位信息越全面、越准确。其中,彩色像素组在不同的光线强度下所对应的第一相位权重不同,全色像素组在不同的光线强度下所对应的第二相位权重不同。例如,当前拍摄场景的光线强度为2000lux时,确定彩色像素组对应的第一相位权重为40%,其中,绿色像素组的相位权重为20%,红色像素组的相位权重为10%,蓝色像素组的相位权重为10%。并确定全色像素组对应的第二相位权重为60%,本申请对此不做限定。
然后,就可以基于彩色像素组的目标相位信息及第一相位权重、全色像素组的目标相位信息及第二相位权重,生成像素阵列在第一方向的第一尺寸相位阵列。例如,针对该像素阵列,基于第一个红色像素组的目标相位信息及相位权重10%、第二个红色像素组的目标相位信息及相位权重10%,第一个蓝色像素组的目标相位信息及相位权重10%、第二个蓝色像素组的目标相位信息及相位权重10%,以及各个绿色像素组的目标相位信息及相位权重20%,还有各个全色像素组的目标相位信息及相位权重60%,共同求和计算出像素阵列在第一方向的相位信息,即得到了第一尺寸相位阵列。
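上述按相位权重加权求和的过程可示意如下。其中权重取文中2000lux时的示例值(绿20%、红10%、蓝10%、全色60%),各像素组的相位数值为假设:

```python
# 文中示例:光线强度为 2000lux 时的各像素组相位权重
weights = {"G": 0.20, "R": 0.10, "B": 0.10, "W": 0.60}

def fuse_phase(group_phases, weights):
    """按各像素组对应的相位权重,对目标相位信息加权求和(示意)。"""
    return sum(weights[k] * v for k, v in group_phases.items())

# 各像素组的目标相位信息(数值仅为示意)
group_phases = {"G": 10.0, "R": 12.0, "B": 8.0, "W": 11.0}
fused = fuse_phase(group_phases, weights)  # 0.2*10 + 0.1*12 + 0.1*8 + 0.6*11 = 10.6
```

各权重之和为1,光线强度变化时只需调整权重分配,加权求和的形式不变。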
本申请实施例中,在进行相位对焦时,若确定目标像素组包括彩色像素组及全色像素组,则可以基于彩色像素组的目标相位信息及其第一相位权重、全色像素组的目标相位信息及其第二相位权重,生成像素阵列的第一尺寸相位阵列。如此,基于彩色像素组及全色像素组的目标相位信息,共同生成像素阵列的第一尺寸相位阵列,可以提高相位信息的全面性。同时,在不同的光线强度下彩色像素组及全色像素组的目标相位信息的相位权重不同,如此,能够在不同的光线强度下通过调节权重大小来提高相位信息的准确性。
在前述实施例中,描述了像素阵列可以是RGBW像素阵列,包括多个最小重复单元241,最小重复单元241包括多个像素组242,多个像素组242包括全色像素组243和彩色像素组244。每个全色像素组243中包括9个全色像素2431,每个彩色像素组244中包括9个彩色像素2441。
本实施例中,在像素阵列是RGBW像素阵列的情况下,相位信息输出模式还包括第二尺寸输出模式;其中,第一尺寸输出模式下的相位阵列的大小大于或等于第二尺寸输出模式下相位阵列的大小;
结合图13所示,操作1024,根据目标光线强度范围,确定与当前拍摄场景的光线强度适配的相位信息输出模式,包括:
操作1024c,若当前拍摄场景的光线强度小于第二预设阈值,则确定与当前拍摄场景的光线强度适配的相位信息输出模式为第二尺寸输出模式。
具体的,若其中一个光线强度范围为小于第二预设阈值的范围,该光线强度范围对应的相位信息输出模式为第二尺寸输出模式。若判断当前拍摄场景的光线强度小于第二预设阈值,则当前拍摄场景的光线强度落入了该光线强度范围。即确定与当前拍摄场景的光线强度适配的相位信息输出模式为第二尺寸输出模式。其中,第二预设阈值可以为500lux,本申请对此不做限定。即此时处于黄昏或光线强度小于500lux的环境中。
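上述按阈值划分光线强度范围并选择输出模式的逻辑,可示意如下。其中第二预设阈值取文中示例的500lux,第一预设阈值为假设值,实际取值由具体产品确定:

```python
def select_output_mode(light_lux, first_threshold=5000.0, second_threshold=500.0):
    """根据当前拍摄场景的光线强度,确定适配的相位信息输出模式(示意)。
    first_threshold(第一预设阈值)为假设值;second_threshold 取文中示例的 500lux。"""
    if light_lux > first_threshold:
        return "full_size"    # 全尺寸输出模式:光线充足,逐像素输出相位信息
    if light_lux > second_threshold:
        return "first_size"   # 第一尺寸输出模式:中等光线,适度合并
    return "second_size"      # 第二尺寸输出模式:弱光,大幅合并以提高信噪比
```

三个分支分别对应文中的三个光线强度范围,阈值的边界取"大于"与"小于或等于"的划分与文中一致。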
其中,采用第二尺寸输出模式输出相位阵列,即为将像素阵列的原始相位信息进行合并后输出,生成该像素阵列的相位阵列。换言之,像素阵列的尺寸大于该像素阵列的相位阵列的大小。例如,若像素阵列的尺寸为12×12,则该像素阵列中各目标像素组的相位阵列的大小是4×2,本申请中并不对该相位阵列的大小做出限定。
本申请实施例中,由于第一尺寸输出模式下的相位阵列的大小大于或等于第二尺寸输出模式下相位阵列的大小,则若当前拍摄场景的光线强度小于第二预设阈值,则确定与当前拍摄场景的光线强度适配的相位信息输出模式为第二尺寸输出模式,此时采用第二尺寸输出模式输出小于像素阵列尺寸的相位阵列。即在当前拍摄场景的光线强度更弱的情况下,通过缩小相位阵列来提高相位信息的信噪比。
在一个实施例中,若相位信息输出模式为第二尺寸输出模式,则从像素阵列中的彩色像素组及全色像素组中确定目标像素组,包括:
将像素阵列中沿第一对角线方向相邻的两个彩色像素组作为目标像素组及沿第二对角线方向相邻的两个全色像素组作为目标像素组;
根据目标像素的相位信息,生成与像素阵列对应的相位阵列,包括:
将所述目标像素组内两个全色像素组的目标像素的相位信息进行合并,将所述目标像素组内两个彩色像素组的所述目标像素的相位信息进行合并,生成多组目标合并相位信息;
根据多组目标合并相位信息,生成与像素阵列对应的第二尺寸相位阵列。
本实施例为在当前拍摄场景的光线强度小于第二预设阈值的情况下,按照第二尺寸输出模式,输出与像素阵列对应的相位阵列的具体实现操作。其中,此时处于黄昏或光线强度小于500lux的环境中。首先,将像素阵列中沿第一对角线方向相邻的两个彩色像素组作为目标像素组及沿第二对角线方向相邻的两个全色像素组作为目标像素组。因为在当前拍摄场景的光线强度小于第二预设阈值的情况下,即在光线较暗的场景下,全色像素能够捕捉到更多的光线信息,所以此时可以将像素阵列中沿第一对角线方向相邻的两个彩色像素组作为目标像素组及沿第二对角线方向相邻的两个全色像素组作为目标像素组。将所述目标像素组内两个全色像素组的目标像素的相位信息进行合并,将所述目标像素组内两个彩色像素组的所述目标像素的相位信息进行合并,生成多组目标合并相位信息。根据多组目标合并相位信息,生成与像素阵列对应的第二尺寸相位阵列。其中,将所述目标像素组内两个全色像素组的目标像素的相位信息进行合并的过程,可以参考下一实施例中将两个全色像素组的目标像素的相位信息进行合并的过程。将所述目标像素组内两个彩色像素组的所述目标像素的相位信息进行合并,也可以参考该过程。本实施例中不再赘述。
在一个实施例中,若所述相位信息输出模式为所述第二尺寸输出模式,则所述从所述像素阵列中的所述彩色像素组及所述全色像素组中确定目标像素组,包括:
将所述沿第二对角线方向相邻的两个全色像素组作为目标像素组;
所述根据所述目标像素的相位信息,生成与所述像素阵列对应的相位阵列,包括:
将所述目标像素组内两个全色像素组的所述目标像素的相位信息进行合并,生成多组目标合并相位信息;
根据多组目标合并相位信息,生成与所述像素阵列对应的第二尺寸相位阵列。
本实施例为在当前拍摄场景的光线强度小于第二预设阈值的情况下,按照第二尺寸输出模式,输出与像素阵列对应的相位阵列的具体实现操作。其中,此时处于黄昏或光线强度小于500lux的环境中。结合图16所示,首先将像素阵列中沿第一对角线方向相邻的两个彩色像素组作为目标像素组及沿第二对角线方向相邻的两个全色像素组作为目标像素组。因为在当前拍摄场景的光线强度小于第二预设阈值的情况下,即在光线较暗的场景下,全色像素能够捕捉到更多的光线信息,所以此时可以使用全色像素组的相位信息来实现相位对焦(PDAF)。因此,在按照第二尺寸输出模式,输出与像素阵列对应的相位阵列时,可以选择全色像素组作为目标像素组,也可以选择彩色像素组及全色像素组作为目标像素组。
具体的,针对目标像素组为全色像素组的情况,可以使用像素阵列中的部分全色像素组的相位信息来实现相位对焦,还可以是使用部分全色像素组中的部分全色像素来实现相位对焦,本申请对此不做限定。针对目标像素组为彩色像素组及全色像素组的情况,可以使用像素阵列中的部分彩色像素组及部分全色像素组的相位信息来实现相位对焦,还可以是使用部分彩色像素组的部分彩色像素、部分全色像素组中的部分全色像素来实现相位对焦,本申请对此不做限定。
由于此时可以只使用部分像素组的相位信息来进行相位对焦,或只使用了部分像素组中的部分像素的相位信息进行相位对焦,所以减小了所输出的相位信息的数据量,进而提高了相位对焦的效率。
其次,从目标像素组对应的微透镜中确定目标微透镜,并将目标微透镜所对应的至少两个像素作为目标像素。
具体的,可以确定与目标像素组对应的第一方向的目标微透镜;第一方向的目标微透镜对应沿第一方向排布且相邻的至少2个目标像素。将第一方向的目标微透镜所对应的至少两个像素作为目标像素。还可以确定与目标像素组对应的第二方向的目标微透镜;第二方向的目标微透镜对应沿第二方向排布且相邻的至少2个目标像素;第二方向与第一方向相互垂直。将第二方向的目标微透镜所对应的至少两个像素作为目标像素。还可以确定与目标像素组对应的第一方向的目标微透镜及第二方向的目标微透镜;第一方向的目标微透镜对应沿第一方向排布且相邻的至少2个目标像素;第二方向的目标微透镜对应沿第二方向排布且相邻的至少2个目标像素。将第一方向的目标微透镜所对应的至少两个像素及第二方向的目标微透镜所对应的至少两个像素,作为目标像素。
再次,针对各目标像素组,获取目标像素组中每个目标像素的相位信息。其中,每个像素对应一个感光元件。一方面,假设将该像素阵列中沿第二对角线方向相邻的两个全色像素组都作为目标像素组,针对各目标像素组中的全色像素组,获取该全色像素组中每个目标像素的相位信息。将所述目标像素组内两个全色像素组的目标像素的相位信息进行合并,生成多组目标合并相位信息;最后,根据多组目标合并相位信息,生成与像素阵列对应的第二尺寸相位阵列。
结合图16所示,一个像素阵列可以包括2个红色像素组、4个绿色像素组、2个蓝色像素组以及8个全色像素组。假设将该像素阵列中沿第二对角线方向相邻的两个全色像素组都作为目标像素组,则针对该像素阵列中所包括的8个全色像素组,即4对全色像素组,依次计算各目标像素组的相位信息。例如,针对各目标像素组中的全色像素组,计算该全色像素组的相位信息。具体的,针对目标像素组中的第一全色像素组,第一全色像素组包括按照3×3阵列排布的9个全色像素,依次编号为全色像素1、全色像素2、全色像素3、全色像素4、全色像素5、全色像素6、全色像素7、全色像素8、全色像素9。其中,每个像素对应一个感光元件。即全色像素1的相位信息为L1、全色像素2的相位信息为R1;全色像素8的相位信息为L2、全色像素9的相位信息为R2;全色像素3的相位信息为U1、全色像素6的相位信息为D1;全色像素4的相位信息为U2、全色像素7的相位信息为D2。其中,处于全色像素组正中心的像素5单独对应一个微透镜,因此,未得到相位信息。
同理,针对目标像素组中的第二全色像素组,第二全色像素组包括按照3×3阵列排布的9个全色像素,依次编号为全色像素1、全色像素2、全色像素3、全色像素4、全色像素5、全色像素6、全色像素7、全色像素8、全色像素9。其中,每个像素对应一个感光元件。即全色像素1的相位信息为L1、全色像素2的相位信息为R1;全色像素8的相位信息为L2、全色像素9的相位信息为R2;全色像素3的相位信息为U1、全色像素6的相位信息为D1;全色像素4的相位信息为U2、全色像素7的相位信息为D2。其中,处于全色像素组正中心的像素5单独对应一个微透镜,因此,未得到相位信息。
因此,若确定与目标像素组对应的第一方向的目标微透镜,且将第一方向的目标微透镜所对应的至少两个像素作为目标像素。假设第一方向为水平方向,则将第一全色像素组对应的L1、R1、L2、R2依次输出,将第二全色像素组对应的L1、R1、L2、R2依次输出。再将目标像素的相位信息进行合并,生成多组目标合并相位信息。例如,将第一全色像素组对应的L1、L2与第二全色像素组对应的L1、L2进行合并,生成第一目标合并相位信息L;将第一全色像素组对应的R1、R2与第二全色像素组对应的R1、R2进行合并,生成第一目标合并相位信息R。
如此,依次对该像素阵列所包括的其他全色像素组均进行上述处理,再将第一全色像素组对应的L3、L4与第二全色像素组对应的L3、L4进行合并,生成第一目标合并相位信息L;将第一全色像素组对应的R3、R4与第二全色像素组对应的R3、R4进行合并,生成第一目标合并相位信息R。将第一全色像素组对应的L5、L6与第二全色像素组对应的L5、L6进行合并,生成第一目标合并相位信息L;将第一全色像素组对应的R5、R6与第二全色像素组对应的R5、R6进行合并,生成第一目标合并相位信息R。将第一全色像素组对应的L7、L8与第二全色像素组对应的L7、L8进行合并,生成第一目标合并相位信息L;将第一全色像素组对应的R7、R8与第二全色像素组对应的R7、R8进行合并,生成第一目标合并相位信息R。
最后,根据多组目标合并相位信息,生成与像素阵列对应的第二尺寸相位阵列。如此,由4组第一目标合并相位信息L、第一目标合并相位信息R依次排列生成了第二尺寸相位阵列。且该第二尺寸相位阵列的大小相当于阵列排布的4×2个像素的大小。
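将一对全色像素组的同位置相位信息合并为一组目标合并相位信息(L,R)的过程,可用如下示意代码表达。假设采用求平均的方式合并,相位数值均为假设:

```python
def merge(phases):
    """将目标像素的相位信息进行合并(此处以求平均为例)。"""
    return sum(phases) / len(phases)

# 第一、第二全色像素组各自的左/右相位信息(数值仅为示意)
g1 = {"L1": 100.0, "L2": 102.0, "R1": 96.0, "R2": 98.0}
g2 = {"L1": 104.0, "L2": 106.0, "R1": 92.0, "R2": 94.0}

# 将两个全色像素组的同位置相位信息合并,得到一组目标合并相位信息 (L, R)
L = merge([g1["L1"], g1["L2"], g2["L1"], g2["L2"]])  # 第一目标合并相位信息 L
R = merge([g1["R1"], g1["R2"], g2["R1"], g2["R2"]])  # 第一目标合并相位信息 R

# 像素阵列含 4 对全色像素组,每对各得到一组 (L, R),
# 依次排列即构成相当于 4×2 个像素大小的第二尺寸相位阵列
phase_array = [(L, R)] * 4  # 示意:实际中每对像素组各自计算
```

相比第一尺寸输出模式,此处将两个像素组共4个同向相位值合并为1个,合并幅度更大,信噪比更高,相位阵列也更小。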
当然,还可以是对4组第一目标合并相位信息L、第一目标合并相位信息R进行合并处理或变换处理,生成与像素阵列对应的第二尺寸相位阵列。这里的变换处理,可以是对4组第一目标合并相位信息L、第一目标合并相位信息R进行校正等处理,本申请对此不做限定。在对4组第一目标合并相位信息L、第一目标合并相位信息R进行合并处理时,可以将相当于4×2个像素大小的相位阵列,合并为2×2个像素大小的相位阵列。当然,本申请并不对合并后的相位阵列的具体大小做限定。这里,像素的大小指的是一个像素的面积大小,该面积大小与像素的长与宽相关。
其中,像素是数码相机感光器件(CCD或CMOS)上的最小感光单位。一般情况下,像素没有固定的大小,像素的大小与显示屏的尺寸以及分辨率相关。例如,显示屏的尺寸为4.5英寸,且显示屏的分辨率为1280×720,则显示屏的长为99.6mm,宽为56mm,则一个像素的长为99.6mm/1280=0.0778mm,宽为56mm/720=0.0778mm。在这个例子中,阵列排布的4×2个像素的大小为:长为4×0.0778mm,宽为2×0.0778mm。当然,本申请对此不做限定。那么,该第二尺寸相位阵列的大小的长为4×0.0778mm,宽为2×0.0778mm。当然,在其他实施例中,像素也可以不是长宽相等的矩形,像素还可以是其他异形结构,本申请对此不做限定。
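文中像素尺寸的计算示例可复算如下,数值取文中给出的4.5英寸、1280×720显示屏示例:

```python
# 文中示例:4.5 英寸显示屏,分辨率 1280×720,长 99.6mm、宽 56mm
screen_length_mm, screen_width_mm = 99.6, 56.0
res_length, res_width = 1280, 720

pixel_length_mm = screen_length_mm / res_length  # 每个像素的长,约 0.0778 mm
pixel_width_mm = screen_width_mm / res_width     # 每个像素的宽,约 0.0778 mm

# 阵列排布的 4×2 个像素的大小:长为 4 个像素长,宽为 2 个像素宽
array_length_mm = 4 * pixel_length_mm
array_width_mm = 2 * pixel_width_mm
```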
同理,若确定与目标像素组对应的第二方向的目标微透镜,且将第二方向的目标微透镜所对应的至少两个像素作为目标像素。假设第二方向为竖直方向,则将第一全色像素组对应的U1、D1、U2、D2依次输出,将第二全色像素组对应的U1、D1、U2、D2依次输出。再将目标像素的相位信息进行合并,生成多组目标合并相位信息。例如,将第一全色像素组对应的U1、U2与第二全色像素组对应的U1、U2进行合并,生成第一目标合并相位信息U;将第一全色像素组对应的D1、D2与第二全色像素组对应的D1、D2进行合并,生成第一目标合并相位信息D。
如此,依次对该像素阵列所包括的其他全色像素组均进行上述处理,再将第一全色像素组对应的U3、U4与第二全色像素组对应的U3、U4进行合并,生成第一目标合并相位信息U;将第一全色像素组对应的D3、D4与第二全色像素组对应的D3、D4进行合并,生成第一目标合并相位信息D。将第一全色像素组对应的U5、U6与第二全色像素组对应的U5、U6进行合并,生成第一目标合并相位信息U;将第一全色像素组对应的D5、D6与第二全色像素组对应的D5、D6进行合并,生成第一目标合并相位信息D。将第一全色像素组对应的U7、U8与第二全色像素组对应的U7、U8进行合并,生成第一目标合并相位信息U;将第一全色像素组对应的D7、D8与第二全色像素组对应的D7、D8进行合并,生成第一目标合并相位信息D。
最后,根据多组目标合并相位信息,生成与像素阵列对应的第二尺寸相位阵列。如此,由4组第一目标合并相位信息U、第一目标合并相位信息D依次排列生成了第二尺寸相位阵列。且该第二尺寸相位阵列的大小相当于阵列排布的4×2个像素的大小。
同理,若确定与目标像素组对应的第一方向的目标微透镜及第二方向的目标微透镜,且将第一方向的目标微透镜及第二方向的目标微透镜所对应的至少两个像素作为目标像素。假设第一方向为水平方向、第二方向为竖直方向,则将L1、R1、L2、R2、U1、D1、U2、D2依次输出,再将目标像素的相位信息进行合并,生成多组目标合并相位信息。例如,将第一全色像素组对应的L1、L2与第二全色像素组对应的L1、L2进行合并,生成第一目标合并相位信息L;将第一全色像素组对应的R1、R2与第二全色像素组对应的R1、R2进行合并,生成第一目标合并相位信息R。将第一全色像素组对应的U1、U2与第二全色像素组对应的U1、U2进行合并,生成第一目标合并相位信息U;将第一全色像素组对应的D1、D2与第二全色像素组对应的D1、D2进行合并,生成第一目标合并相位信息D。
那么,依次对该像素阵列所包括的其他全色像素组均进行上述处理,得到该像素阵列的相位信息。最后,将目标像素的相位信息进行合并,生成多组目标合并相位信息,将多组目标合并相位信息作为与该像素阵列对应的第二尺寸相位阵列。且该第二尺寸相位阵列的大小相当于阵列排布的4×4个像素的大小。
另一方面,假设将该像素阵列中的所有全色像素组及彩色像素组都作为目标像素组,针对各全色像素组及各彩色像素组,获取该全色像素组及各彩色像素组中每个目标像素的相位信息。将目标像素的相位信息进行合并,生成多组目标合并相位信息;最后,根据多组目标合并相位信息,生成与像素阵列对应的第二尺寸相位阵列。具体的计算过程,同将该像素阵列中的所有全色像素组都作为目标像素组时,计算第二尺寸相位阵列的过程,在此不再赘述。
此时,可以将相位阵列输入ISP,通过ISP基于相位阵列计算像素阵列的相位差。然后,基于相位差计算出离焦距离,并计算出与该离焦距离对应的DAC code值。最后,通过马达(VCM)的driver IC将code值转换为驱动电流,并由马达驱动镜头移动到清晰位置。从而,根据相位差实现了对焦控制。
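上述由相位差到马达驱动的对焦流程,可用如下示意代码概括。其中 defocus_coeff、dac_per_um 为假设的标定参数,相位差到离焦距离、离焦距离到DAC code的换算关系实际由模组标定得到,此处仅以线性关系示意:

```python
def focus_control(left_phases, right_phases, defocus_coeff=1.0, dac_per_um=0.5):
    """示意 PDAF 流程:相位阵列 → 相位差 → 离焦距离 → DAC code。
    后续由马达(VCM)的 driver IC 将 code 值转换为驱动电流,驱动镜头移动到清晰位置。"""
    # 1. 基于相位阵列计算相位差(此处以左右相位均值之差为例)
    phase_diff = (sum(left_phases) / len(left_phases)
                  - sum(right_phases) / len(right_phases))
    # 2. 基于相位差计算离焦距离(示意为线性关系,系数由标定确定)
    defocus_um = defocus_coeff * phase_diff
    # 3. 计算与该离焦距离对应的 DAC code 值
    dac_code = int(round(defocus_um * dac_per_um))
    return phase_diff, defocus_um, dac_code
```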
本申请实施例中,在当前拍摄场景的光线强度小于第二预设阈值的情况下,因为此时的光线强度更弱,则通过彩色像素组所采集到的相位信息不是很准确,部分彩色像素组可能未采集到相位信息。因此,将像素阵列中沿第一对角线方向相邻的两个彩色像素组及沿第二对角线方向相邻的两个全色像素组作为目标像素组,或将沿第二对角线方向相邻的两个全色像素组作为目标像素组,且针对各目标像素组,获取目标像素组中每个像素的相位信息。且可以从第一方向的微透镜及第二方向的微透镜中的至少一种微透镜中获取目标像素的相位信息,因此,可以获取到多种相位信息。
将目标像素的相位信息进行合并,生成多组目标合并相位信息。根据多组目标合并相位信息,生成与像素阵列对应的第二尺寸相位阵列。采用第二尺寸输出模式将目标像素的相位信息进行大幅度地合并,提高所输出的相位信息的准确性,提高相位信息的信噪比。最终,基于与像素阵列对应的第二尺寸相位阵列进行相位对焦,就可以提高对焦的准确性。
接上一个实施例,若将第一方向的目标微透镜所对应的至少两个像素作为目标像素,则将目标像素的相位信息进行合并,生成多组目标合并相位信息,包括:
获取目标像素组内两个全色像素组的至少四个第一方向的目标微透镜,并从至少四个第一方向的目标微透镜对应的像素中确定处于同一位置的目标像素;所述处于同一位置的目标像素在对应的所述目标微透镜中的所述第一方向上处于同一方位;
将处于同一位置的目标像素的相位信息进行合并,生成多组第一合并相位信息。
结合图16所示,若确定与目标像素组对应的第一方向的目标微透镜,且将第一全色像素组及第二全色像素组中第一方向的目标微透镜所对应的至少两个像素作为目标像素。其中,所述处于同一位置的目标像素在对应的所述目标微透镜中的所述第一方向上处于同一方位。假设第一方向为水平方向,则将第一全色像素组对应的L1、R1、L2、R2依次输出,将第二全色像素组对应的L1、R1、L2、R2依次输出,再将目标像素的相位信息进行合并,生成多组目标合并相位信息。
具体的,从目标像素组内的第一全色像素组及第二全色像素组中,获取至少四个第一方向的目标微透镜(图中灰色填充),并从至少四个第一方向的目标微透镜对应的像素中确定处于同一位置的目标像素。例如,从第一全色像素组中的第一方向的目标微透镜2211a(图中灰色填充)对应的像素1、像素2、像素8、像素9中,以及第二全色像素组中的第一方向的目标微透镜2211a(图中灰色填充)对应的像素1、像素2、像素8、像素9中确定处于同一位置的目标像素。即确定处于第一方向的目标微透镜2211a(图中灰色填充)的左侧的像素为目标像素,确定处于第一方向的目标微透镜2211a(图中灰色填充)的右侧的像素为目标像素。即将第一全色像素组中的L1、L2、第二全色像素组中的L1、L2进行合并,生成第一目标合并相位信息L;将第一全色像素组中的R1、R2、第二全色像素组中的R1、R2进行合并,生成第一目标合并相位信息R。
基于多组第一目标合并相位信息L、第一目标合并相位信息R,生成多组第一合并相位信息。
本申请实施例中,获取目标像素组内的至少四个第一方向的目标微透镜,并从至少四个第一方向的目标微透镜对应的像素中确定处于同一位置的目标像素。将处于同一位置的目标像素的相位信息进行合并,生成多组第一合并相位信息。采用第二尺寸输出模式将像素的相位信息进行大幅度地合并,提高所输出的相位信息的准确性,提高相位信息的信噪比。最终,基于与像素阵列对应的第二尺寸相位阵列进行相位对焦,就可以提高对焦的准确性。
接上一个实施例,假设第二方向为竖直方向,即与第一方向相互垂直的方向,若将第二方向的目标微透镜所对应的至少两个像素作为目标像素,则将目标像素组内的目标像素的相位信息进行合并,生成多组目标合并相位信息,包括:
获取目标像素组内的至少四个第二方向的目标微透镜,并从至少四个第二方向的目标微透镜对应的像素中确定处于同一位置的目标像素;所述处于同一位置的目标像素在对应的所述目标微透镜中的所述第二方向上处于同一方位;
将处于同一位置的目标像素的相位信息进行合并,生成多组第二合并相位信息。
结合图16所示,若确定与目标像素组对应的第二方向的目标微透镜,且将第一全色像素组及第二全色像素组中第二方向的目标微透镜所对应的至少两个像素作为目标像素。所述处于同一位置的目标像素在对应的所述目标微透镜中的所述第二方向上处于同一方位。假设第二方向为竖直方向,则将第一全色像素组对应的U1、D1、U2、D2依次输出,将第二全色像素组对应的U1、D1、U2、D2依次输出,再将目标像素的相位信息进行合并,生成多组目标合并相位信息。
具体的,从目标像素组内的第一全色像素组及第二全色像素组中,获取至少四个第二方向的目标微透镜(图中斜线填充),并从至少四个第二方向的目标微透镜对应的像素中确定处于同一位置的目标像素。例如,从第一全色像素组中的第二方向的目标微透镜2211a(图中斜线填充)对应的像素3、像素6、像素4、像素7中,以及第二全色像素组中的第二方向的目标微透镜2211a(图中斜线填充)对应的像素3、像素6、像素4、像素7中确定处于同一位置的目标像素。即确定处于第二方向的目标微透镜2211a(图中斜线填充)的上方的像素为目标像素,确定处于第二方向的目标微透镜2211a(图中斜线填充)的下方的像素为目标像素。即将第一全色像素组中的U1、U2、第二全色像素组中的U1、U2进行合并,生成第二目标合并相位信息U;将第一全色像素组中的D1、D2、第二全色像素组中的D1、D2进行合并,生成第二目标合并相位信息D。
基于多组第二目标合并相位信息U、第二目标合并相位信息D,生成多组第二合并相位信息。
本申请实施例中,获取目标像素组内的至少四个第二方向的目标微透镜,并从至少四个第二方向的目标微透镜对应的像素中确定处于同一位置的目标像素。将处于同一位置的目标像素的相位信息进行合并,生成多组第二合并相位信息。采用第二尺寸输出模式将像素的相位信息进行大幅度地合并,提高所输出的相位信息的准确性,提高相位信息的信噪比。最终,基于与像素阵列对应的第二尺寸相位阵列进行相位对焦,就可以提高对焦的准确性。
接上一个实施例,若将第一方向的目标微透镜所对应的至少两个像素及第二方向的目标微透镜所对应的至少两个像素,作为目标像素,则将目标像素组内两个全色像素组的目标像素的相位信息进行合并,生成多组目标合并相位信息,包括:
获取目标像素组内两个全色像素组的至少四个第一方向的目标微透镜,并从至少四个第一方向的目标微透镜对应的像素中确定处于同一位置的目标像素,
将处于同一位置的目标像素的相位信息进行合并,生成多组第一合并相位信息;
获取目标像素组内两个全色像素组的至少四个第二方向的目标微透镜,并从至少四个第二方向的目标微透镜对应的像素中确定处于同一位置的目标像素;
将处于同一位置的目标像素的相位信息进行合并,生成多组第二合并相位信息。
具体的,结合图16所示,若确定与目标像素组对应的第一方向的目标微透镜,且将第一全色像素组及第二全色像素组中第一方向的目标微透镜所对应的至少两个像素作为目标像素。假设第一方向为水平方向,则将第一全色像素组对应的L1、R1、L2、R2依次输出,将第二全色像素组对应的L1、R1、L2、R2依次输出,再将目标像素的相位信息进行合并,生成多组目标合并相位信息。
具体的,从目标像素组内的第一全色像素组及第二全色像素组中,获取至少四个第一方向的目标微透镜(图中灰色填充),并从至少四个第一方向的目标微透镜对应的像素中确定处于同一位置的目标像素。例如,从第一全色像素组中的第一方向的目标微透镜2211a(图中灰色填充)对应的像素1、像素2、像素8、像素9中,以及第二全色像素组中的第一方向的目标微透镜2211a(图中灰色填充)对应的像素1、像素2、像素8、像素9中确定处于同一位置的目标像素。即确定处于第一方向的目标微透镜2211a(图中灰色填充)的左侧的像素为目标像素,确定处于第一方向的目标微透镜2211a(图中灰色填充)的右侧的像素为目标像素。即将第一全色像素组中的L1、L2、第二全色像素组中的L1、L2进行合并,生成第一目标合并相位信息L;将第一全色像素组中的R1、R2、第二全色像素组中的R1、R2进行合并,生成第一目标合并相位信息R。
基于多组第一目标合并相位信息L、第一目标合并相位信息R,生成多组第一合并相位信息。
结合图16所示,若确定与目标像素组对应的第一方向的目标微透镜及第二方向的目标微透镜,且将第一全色像素组及第二全色像素组中第一方向的目标微透镜所对应的至少两个像素作为目标像素,将第一全色像素组及第二全色像素组中第二方向的目标微透镜所对应的至少两个像素也作为目标像素。假设第一方向为水平方向、第二方向为竖直方向,则将第一全色像素组对应的L1、R1、L2、R2依次输出,将第二全色像素组对应的L1、R1、L2、R2依次输出,将第一全色像素组对应的U1、D1、U2、D2依次输出,将第二全色像素组对应的U1、D1、U2、D2依次输出,再将目标像素的相位信息进行合并,生成多组目标合并相位信息。
具体的,从目标像素组内的第一全色像素组及第二全色像素组中,获取至少四个第一方向的目标微透镜(图中灰色填充),并从至少四个第一方向的目标微透镜对应的像素中确定处于同一位置的目标像素。例如,从第一全色像素组中的第一方向的目标微透镜2211a(图中灰色填充)对应的像素1、像素2、像素8、像素9中,以及第二全色像素组中的第一方向的目标微透镜2211a(图中灰色填充)对应的像素1、像素2、像素8、像素9中确定处于同一位置的目标像素。即确定处于第一方向的目标微透镜2211a(图中灰色填充)的左侧的像素为目标像素,确定处于第一方向的目标微透镜2211a(图中灰色填充)的右侧的像素为目标像素。即将第一全色像素组中的L1、L2、第二全色像素组中的L1、L2进行合并,生成第一目标合并相位信息L;将第一全色像素组中的R1、R2、第二全色像素组中的R1、R2进行合并,生成第一目标合并相位信息R。
进一步,从目标像素组内的第一全色像素组及第二全色像素组中,获取至少四个第二方向的目标微透镜(图中斜线填充),并从至少四个第二方向的目标微透镜对应的像素中确定处于同一位置的目标像素。例如,从第一全色像素组中的第二方向的目标微透镜2211a(图中斜线填充)对应的像素3、像素6、像素4、像素7中,以及第二全色像素组中的第二方向的目标微透镜2211a(图中斜线填充)对应的像素3、像素6、像素4、像素7中确定处于同一位置的目标像素。即确定处于第二方向的目标微透镜2211a(图中斜线填充)的上方的像素为目标像素,确定处于第二方向的目标微透镜2211a(图中斜线填充)的下方的像素为目标像素。即将第一全色像素组中的U1、U2、第二全色像素组中的U1、U2进行合并,生成第二目标合并相位信息U;将第一全色像素组中的D1、D2、第二全色像素组中的D1、D2进行合并,生成第二目标合并相位信息D。
基于多组第一目标合并相位信息L、第一目标合并相位信息R,以及多组第二目标合并相位信息U、第二目标合并相位信息D,生成多组第二合并相位信息。
本申请实施例中,获取目标像素组内两个全色像素组的至少四个第一方向的目标微透镜、至少四个第二方向的目标微透镜,并从至少四个第一方向的目标微透镜、至少四个第二方向的目标微透镜对应的像素中确定处于同一位置的目标像素。将处于同一位置的目标像素的相位信息进行合并,生成多组第二合并相位信息。因此,可以同时基于第一方向及第二方向的目标微透镜获取到相位信息,进而采用第二尺寸输出模式将像素的相位信息进行大幅度地合并,提高所输出的相位信息的准确性,提高相位信息的信噪比。最终,基于与像素阵列对应的第二尺寸相位阵列进行相位对焦,就可以提高对焦的准确性。
在一个实施例中,若目标像素组为像素阵列中沿第一对角线方向相邻的两个彩色像素组及沿第二对角线方向相邻的两个全色像素组,则根据多组目标合并相位信息,生成与像素阵列对应的第二尺寸相位阵列,包括:
根据当前拍摄场景的光线强度确定彩色像素组对应的第三相位权重以及全色像素组对应的第四相位权重;其中,彩色像素组在不同的光线强度下所对应的第三相位权重不同,全色像素组在不同的光线强度下所对应的第四相位权重不同;
基于彩色像素组的目标相位信息及第三相位权重、全色像素组的目标相位信息及第四相位权重,生成像素阵列的第二尺寸相位阵列。
具体的,在当前拍摄场景的光线强度小于第二预设阈值的场景下,且所确定的目标像素组包括像素阵列中沿第一对角线方向相邻的两个彩色像素组及沿第二对角线方向相邻的两个全色像素组,那么,基于目标像素组的相位信息生成第二尺寸相位阵列时,可以考虑不同像素组之间的权重。其中,可以根据当前拍摄场景的光线强度确定彩色像素组对应的第三相位权重以及全色像素组对应的第四相位权重。具体的,当前拍摄场景的光线强度越远离第二预设阈值,则此时彩色像素组对应的第三相位权重越小,而全色像素组对应的第四相位权重越大。因为此时光线强度在小于第二预设阈值的场景下偏小,全色像素组对应的第四相位权重越大,则所获取到的相位信息越准确。随着光线强度增大,当前拍摄场景的光线强度越接近第二预设阈值,则此时彩色像素组对应的第三相位权重越大,而全色像素组对应的第四相位权重越小。因为此时光线强度在小于第二预设阈值的场景下偏大,彩色像素组对应的第三相位权重越大,则所获取到的相位信息越全面、越准确。其中,彩色像素组在不同的光线强度下所对应的第三相位权重不同,全色像素组在不同的光线强度下所对应的第四相位权重不同。例如,当前拍摄场景的光线强度为450lux时,确定彩色像素组对应的第三相位权重为40%,其中,绿色像素组的相位权重为20%,红色像素组的相位权重为10%,蓝色像素组的相位权重为10%。并确定全色像素组对应的第四相位权重为60%,本申请对此不做限定。
然后,就可以基于彩色像素组的多组目标合并相位信息及第三相位权重、全色像素组的多组目标合并相位信息及第四相位权重,生成像素阵列的第二尺寸相位阵列。例如,针对该像素阵列,基于第一个红色像素组的多组目标合并相位信息及相位权重10%、第二个红色像素组的多组目标合并相位信息及相位权重10%,第一个蓝色像素组的多组目标合并相位信息及相位权重10%、第二个蓝色像素组的多组目标合并相位信息及相位权重10%,以及各个绿色像素组的多组目标合并相位信息及相位权重20%,还有各个全色像素组的多组目标合并相位信息及相位权重60%,共同求和计算出像素阵列在第一方向的相位信息,即得到了第二尺寸相位阵列。
本申请实施例中,在进行相位对焦时,若确定目标像素组为像素阵列中沿第一对角线方向相邻的两个彩色像素组及沿第二对角线方向相邻的两个全色像素组,则可以基于彩色像素组的多组目标合并相位信息及其第三相位权重、全色像素组的多组目标合并相位信息及其第四相位权重,生成像素阵列的第二尺寸相位阵列。如此,基于彩色像素组及全色像素组的目标相位信息,共同生成像素阵列的第二尺寸相位阵列,可以提高相位信息的全面性。同时,在不同的光线强度下彩色像素组及全色像素组的目标相位信息的相位权重不同,如此,能够在不同的光线强度下通过调节权重大小来提高相位信息的准确性。
在一个实施例中,在按照相位信息输出模式,输出与像素阵列对应的相位阵列之前,还包括:
根据用于对焦控制的像素阵列的预设提取比例及预设提取位置,从图像传感器中的多个像素阵列中确定目标像素阵列;
按照相位信息输出模式,输出与像素阵列对应的相位阵列,包括:
按照相位信息输出模式,输出与目标像素阵列对应的相位阵列。
具体的,图像传感器的面积较大,所包含的作为最小单元的像素阵列数以万计,若从图像传感器中提取所有的相位信息进行相位对焦,则因相位信息数据量太大,导致实际计算量过大,从而浪费系统资源、降低图像处理速度。
为了节约系统资源,并提高图像处理速度,可以预先按照预设提取比例及预设提取位置,从图像传感器中的多个像素阵列中提取用于对焦控制的像素阵列。例如,可以按照3%的预设提取比例进行提取,即从32个像素阵列中提取一个像素阵列作为用于对焦控制的像素阵列。且以所提取的像素阵列作为六边形的顶点进行排布,即所提取的像素阵列构成了六边形。如此,能够均匀地获取到相位信息。当然,本申请并不对预设提取比例及预设提取位置进行限定。
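按预设提取比例从多个像素阵列中均匀抽取目标像素阵列的过程,可示意如下。此处取文中约3%(即每32个像素阵列取1个)的示例比例,函数为假设的示意实现,未体现按六边形顶点排布的提取位置:

```python
def select_target_arrays(num_arrays, ratio=1.0 / 32):
    """按预设提取比例,从图像传感器的多个像素阵列中均匀抽取
    用于对焦控制的目标像素阵列,返回所选阵列的下标(示意)。"""
    step = max(1, int(round(1.0 / ratio)))  # 每 step 个像素阵列抽取 1 个
    return list(range(0, num_arrays, step))
```

仅对抽取出的目标像素阵列输出相位阵列,即可将相位信息的数据量缩减为原来的约3%。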
然后,就可以根据当前拍摄场景的光线强度,确定与当前拍摄场景的光线强度适配的相位信息输出模式。并针对用于对焦控制的像素阵列,按照相位信息输出模式,输出与该像素阵列对应的相位阵列;其中,相位阵列包括像素阵列中目标像素对应的相位信息。最后,基于相位阵列计算像素阵列的相位差,并根据相位差进行对焦控制。
本申请实施例中,根据用于对焦控制的像素阵列的预设提取比例及预设提取位置,从图像传感器中的多个像素阵列中确定目标像素阵列。如此,不需要采用图像传感器中的所有相位信息进行对焦,而仅仅采用与目标像素阵列对应的相位信息进行对焦,大大减小了数据量,提高了图像处理的速度。同时,按照预设提取位置,从图像传感器中的多个像素阵列中确定目标像素阵列,能够更加均匀地获取到相位信息。最终,提高相位对焦的准确性。
在一个实施例中,提供了一种对焦控制方法,还包括:
根据曝光参数及像素的尺寸确定光线强度的第一预设阈值及第二预设阈值。
具体的,在确定光线强度的阈值时,可以根据曝光参数及像素的尺寸来确定。其中,曝光参数包括快门速度、镜头光圈大小及感光度(ISO)。
本申请实施例中,根据曝光参数及像素的尺寸确定光线强度的第一预设阈值及第二预设阈值,将光线强度范围划分为3个范围,从而,在每个光线强度范围内采用与该光线强度范围对应的相位信息输出模式,从而,实现了更加精细化地计算相位信息。
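根据曝光参数及像素尺寸确定两个预设阈值的一种可能思路可示意如下。需要强调的是,基准阈值与缩放关系均为假设,实际取值需结合模组标定确定:

```python
def preset_thresholds(pixel_area_um2, iso, base_first=5000.0, base_second=500.0):
    """根据像素尺寸与感光度缩放基准阈值(纯属示意):
    像素面积越大、ISO 越高,弱光下相位信息越可靠,阈值可相应下调。"""
    scale = (1.0 / pixel_area_um2) * (100.0 / iso)
    return base_first * scale, base_second * scale
```

两个阈值将光线强度划分为3个范围,分别对应全尺寸、第一尺寸、第二尺寸三种输出模式。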
在一个实施例中,如图17所示,提供了一种对焦控制装置1700,应用于图像传感器,该装置包括:
相位信息输出模式确定模块1720,用于根据当前拍摄场景的光线强度,确定与当前拍摄场景的光线强度适配的相位信息输出模式;其中,在不同的相位信息输出模式下,所输出的相位阵列的大小不同;
相位阵列输出模块1740,用于按照相位信息输出模式,输出与像素阵列对应的相位阵列;其中,相位阵列包括像素阵列中目标像素对应的相位信息;
对焦控制模块1760,用于基于相位阵列计算像素阵列的相位差,并根据相位差进行对焦控制。
在一个实施例中,如图18所示,相位阵列输出模块1740,包括:
目标像素确定单元1742,用于按照相位信息输出模式,从像素阵列中的彩色像素组及全色像素组中确定目标像素组,确定与目标像素组对应的目标微透镜,并将目标微透镜所对应的至少两个像素作为目标像素;
相位信息生成单元1744,用于针对各目标像素组,获取目标像素的相位信息;
相位阵列生成单元1746,用于根据目标像素的相位信息,生成与像素阵列对应的相位阵列。
在一个实施例中,目标像素确定单元1742,还用于确定与目标像素组对应的第一方向的目标微透镜;第一方向的目标微透镜对应沿第一方向排布且相邻的至少2个目标像素;将第一方向的目标微透镜所对应的至少两个像素作为目标像素。
在一个实施例中,目标像素确定单元1742,还用于确定与目标像素组对应的第二方向的目标微透镜;第二方向的目标微透镜对应沿第二方向排布且相邻的至少2个目标像素;将第二方向的目标微透镜所对应的至少两个像素作为目标像素。
在一个实施例中,目标像素确定单元1742,还用于确定与目标像素组对应的第一方向的目标微透镜及第二方向的目标微透镜;第一方向的目标微透镜对应沿第一方向排布且相邻的至少2个目标像素;第二方向的目标微透镜对应沿第二方向排布且相邻的至少2个目标像素;第二方向与第一方向相互垂直;将第一方向的目标微透镜所对应的至少两个像素及第二方向的目标微透镜所对应的至少两个像素,作为目标像素。
在一个实施例中,相位信息输出模式确定模块1720,还用于确定当前拍摄场景的光线强度所属的目标光线强度范围;其中,不同的光线强度范围对应不同的相位信息输出模式;根据目标光线强度范围,确定与当前拍摄场景的光线强度适配的相位信息输出模式。
在一个实施例中,相位信息输出模式包括全尺寸输出模式及第一尺寸输出模式,全尺寸输出模式下的相位阵列的大小大于或等于第一尺寸输出模式下相位阵列的大小;
相位信息输出模式确定模块1720,包括:
全尺寸输出模式确定单元,用于若当前拍摄场景的光线强度大于第一预设阈值,则确定与当前拍摄场景的光线强度适配的相位信息输出模式为全尺寸输出模式;
第一尺寸输出模式确定单元,用于若当前拍摄场景的光线强度大于第二预设阈值,且小于或等于第一预设阈值,则确定与当前拍摄场景的光线强度适配的相位信息输出模式为第一尺寸输出模式;第一预设阈值大于第二预设阈值。
在一个实施例中,若相位信息输出模式为全尺寸输出模式,则目标像素确定单元1742,还用于将像素阵列中的彩色像素组作为目标像素组,或将所述像素阵列中的彩色像素组和全色像素组作为目标像素组;其中,一个所述目标像素组对应一个像素组;
相应的,
相位阵列生成单元1746,还用于根据目标像素的相位信息,生成与像素阵列对应的全尺寸相位阵列。
在一个实施例中,若相位信息输出模式为第一尺寸输出模式,则目标像素确定单元1742,还用于将像素阵列中的彩色像素组、全色像素组中的至少一种作为目标像素组;一个所述目标像素组对应一个像素组;
相应的,
相位阵列生成单元1746,还用于将所述目标像素组内的目标像素的相位信息进行合并,生成多组中间合并相位信息;根据多组中间合并相位信息,生成与像素阵列对应的第一尺寸相位阵列。
在一个实施例中,若目标像素为第一方向的目标微透镜所对应的至少两个像素,则相位阵列生成单元1746,还用于获取目标像素组内的至少两个第一方向的目标微透镜,并从至少两个第一方向的目标微透镜对应的像素中确定处于同一位置的目标像素;所述处于同一位置的目标像素在对应的所述目标微透镜中的所述第一方向上处于同一方位;将处于同一位置的目标像素的相位信息进行合并,生成多组第一合并相位信息。
在一个实施例中,若目标像素为第二方向的目标微透镜所对应的至少两个像素,则相位阵列生成单元1746,还用于获取目标像素组内的至少两个第二方向的目标微透镜,并从至少两个第二方向的目标微透镜对应的像素中确定处于同一位置的目标像素;所述处于同一位置的目标像素在对应的所述目标微透镜中的所述第二方向上处于同一方位;将处于同一位置的目标像素的相位信息进行合并,生成多组第二合并相位信息。
在一个实施例中,若目标像素为第一方向的目标微透镜所对应的至少两个像素及第二方向的目标微透镜所对应的至少两个像素,则相位阵列生成单元1746,还用于获取目标像素组内的至少两个第一方向的目标微透镜,并从至少两个第一方向的目标微透镜对应的像素中确定处于同一位置的目标像素;所述处于同一位置的目标像素在对应的所述目标微透镜中的所述第一方向上处于同一方位;将处于同一位置的目标像素的相位信息进行合并,生成多组第一合并相位信息;获取目标像素组内的至少两个第二方向的目标微透镜,并从至少两个第二方向的目标微透镜对应的像素中确定处于同一位置的目标像素;所述处于同一位置的目标像素在对应的所述目标微透镜中的所述第二方向上处于同一方位;将处于同一位置的目标像素的相位信息进行合并,生成多组第二合并相位信息;基于多组第一合并相位信息及多组第二合并相位信息,生成多组中间合并相位信息。
在一个实施例中,若目标像素组包括彩色像素组及全色像素组,则相位阵列生成单元1746,还用于根据当前拍摄场景的光线强度确定彩色像素组对应的第一相位权重以及全色像素组对应的第二相位权重;其中,彩色像素组在不同的光线强度下所对应的第一相位权重不同,全色像素组在不同的光线强度下所对应的第二相位权重不同;基于彩色像素组对应的多组中间合并相位信息及第一相位权重、全色像素组对应的多组中间合并相位信息及第二相位权重,生成像素阵列的第一尺寸相位阵列。
在一个实施例中,相位信息输出模式还包括第二尺寸输出模式;其中,第一尺寸输出模式下的相位阵列的大小大于或等于第二尺寸输出模式下相位阵列的大小;
相位信息输出模式确定模块1720,包括:
第二尺寸输出模式确定单元,用于若当前拍摄场景的光线强度小于第二预设阈值,则确定与当前拍摄场景的光线强度适配的相位信息输出模式为第二尺寸输出模式。
在一个实施例中,若相位信息输出模式为第二尺寸输出模式,则目标像素确定单元1742,还用于将所述像素阵列中沿第一对角线方向相邻的两个彩色像素组作为目标像素组及沿第二对角线方向相邻的两个全色像素组作为目标像素组;
相位阵列生成单元1746,还用于将所述目标像素组内两个全色像素组的所述目标像素的相位信息进行合并,将所述目标像素组内两个彩色像素组的所述目标像素的相位信息进行合并,生成多组目标合并相位信息;根据多组目标合并相位信息,生成与所述像素阵列对应的第二尺寸相位阵列。
在一个实施例中,若相位信息输出模式为第二尺寸输出模式,则目标像素确定单元1742,还用于将沿第二对角线方向相邻的两个全色像素组作为目标像素组;
相位阵列生成单元1746,还用于将所述目标像素组内两个全色像素组的所述目标像素的相位信息进行合并,生成多组目标合并相位信息;根据多组目标合并相位信息,生成与所述像素阵列对应的第二尺寸相位阵列。
在一个实施例中,若将第一方向的目标微透镜所对应的至少两个像素作为目标像素,则相位阵列生成单元1746,还用于获取目标像素组内两个全色像素组的至少四个第一方向的目标微透镜,并从至少四个第一方向的目标微透镜对应的像素中确定处于同一位置的目标像素;所述处于同一位置的目标像素在对应的所述目标微透镜中的所述第一方向上处于同一方位;将处于同一位置的目标像素的相位信息进行合并,生成多组第一合并相位信息。
在一个实施例中,若将第二方向的目标微透镜所对应的至少两个像素作为目标像素,则相位阵列生成单元1746,还用于获取目标像素组内两个全色像素组的至少四个第二方向的目标微透镜,并从至少四个第二方向的目标微透镜对应的像素中确定处于同一位置的目标像素;所述处于同一位置的目标像素在对应的所述目标微透镜中的所述第二方向上处于同一方位;将处于同一位置的目标像素的相位信息进行合并,生成多组第二合并相位信息。
在一个实施例中,若将第一方向的目标微透镜所对应的至少两个像素及第二方向的目标微透镜所对应的至少两个像素,作为目标像素,则相位阵列生成单元1746,还用于
获取目标像素组内两个全色像素组的至少四个第一方向的目标微透镜,并从至少四个第一方向的目标微透镜对应的像素中确定处于同一位置的目标像素;所述处于同一位置的目标像素在对应的所述目标微透镜中的所述第一方向上处于同一方位;将处于同一位置的目标像素的相位信息进行合并,生成多组第一合并相位信息;
获取目标像素组内两个全色像素组的至少四个第二方向的目标微透镜,并从至少四个第二方向的目标微透镜对应的像素中确定处于同一位置的目标像素;所述处于同一位置的目标像素在对应的所述目标微透镜中的所述第二方向上处于同一方位;将处于同一位置的目标像素的相位信息进行合并,生成多组第二合并相位信息。
在一个实施例中,若目标像素组为像素阵列中沿第一对角线方向相邻的两个彩色像素组及沿第二对角线方向相邻的两个全色像素组,则相位阵列生成单元1746,还用于根据当前拍摄场景的光线强度确定彩色像素组对应的第三相位权重以及全色像素组对应的第四相位权重;其中,彩色像素组在不同的光线强度下所对应的第三相位权重不同,全色像素组在不同的光线强度下所对应的第四相位权重不同;基于彩色像素组对应的多组目标合并相位信息及第三相位权重、全色像素组对应的多组目标合并相位信息及第四相位权重,生成像素阵列的第二尺寸相位阵列。
在一个实施例中,提供了一种对焦控制装置,还包括:
目标像素阵列确定模块,用于根据用于对焦控制的像素阵列的预设提取比例及预设提取位置,从图像传感器中的多个像素阵列中确定目标像素阵列;
相位阵列输出模块1740,还用于按照相位信息输出模式,输出与目标像素阵列对应的相位阵列。
在一个实施例中,提供了一种对焦控制装置,还包括:
阈值确定模块,用于根据曝光参数及像素的尺寸确定光线强度的第一预设阈值及第二预设阈值。
应该理解的是,虽然上述流程图中的各个操作按照箭头的指示依次显示,但是这些操作并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些操作的执行并没有严格的顺序限制,这些操作可以以其它的顺序执行。而且,上述流程图中的至少一部分操作可以包括多个子操作或者多个阶段,这些子操作或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子操作或者阶段的执行顺序也不必然是依次进行,而是可以与其它操作或者其它操作的子操作或者阶段的至少一部分轮流或者交替地执行。
上述对焦控制装置中各个模块的划分仅仅用于举例说明,在其他实施例中,可将对焦控制装置按照需要划分为不同的模块,以完成上述对焦控制装置的全部或部分功能。
关于对焦控制装置的具体限定可以参见上文中对于对焦控制方法的限定,在此不再赘述。上述对焦控制装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
图19为一个实施例中电子设备的内部结构示意图。该电子设备可以是手机、平板电脑、笔记本电脑、台式电脑、PDA(Personal Digital Assistant,个人数字助理)、POS(Point of Sales,销售终端)、车载电脑、穿戴式设备等任意终端设备。该电子设备包括通过系统总线连接的处理器和存储器。其中,该处理器可以包括一个或多个处理单元。处理器可为CPU(Central Processing Unit,中央处理单元)或DSP(Digital Signal Processing,数字信号处理器)等。存储器可包括非易失性存储介质及内存储器。非易失性存储介质存储有操作系统和计算机程序。该计算机程序可被处理器所执行,以用于实现上述各个实施例所提供的对焦控制方法。内存储器为非易失性存储介质中的操作系统及计算机程序提供高速缓存的运行环境。
本申请实施例中提供的对焦控制装置中的各个模块的实现可为计算机程序的形式。该计算机程序可在电子设备上运行。该计算机程序构成的程序模块可存储在电子设备的存储器上。该计算机程序被处理器执行时,实现本申请实施例中所描述方法的操作。
本申请实施例还提供了一种计算机可读存储介质。一个或多个包含计算机可执行指令的非易失性计算机可读存储介质,当计算机可执行指令被一个或多个处理器执行时,使得处理器执行对焦控制方法的操作。
本申请实施例还提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行对焦控制方法。
本申请所使用的对存储器、存储、数据库或其它介质的任何引用可包括非易失性和/或易失性存储器。非易失性存储器可包括ROM(Read-Only Memory,只读存储器)、PROM(Programmable Read-Only Memory,可编程只读存储器)、EPROM(Erasable Programmable Read-Only Memory,可擦除可编程只读存储器)、EEPROM(Electrically Erasable Programmable Read-Only Memory,电可擦除可编程只读存储器)或闪存。易失性存储器可包括RAM(Random Access Memory,随机存取存储器),它用作外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如SRAM(Static Random Access Memory,静态随机存取存储器)、DRAM(Dynamic Random Access Memory,动态随机存取存储器)、SDRAM(Synchronous Dynamic Random Access Memory,同步动态随机存取存储器)、DDR SDRAM(Double Data Rate Synchronous Dynamic Random Access Memory,双数据率同步动态随机存取存储器)、ESDRAM(Enhanced Synchronous Dynamic Random Access Memory,增强型同步动态随机存取存储器)、SLDRAM(Sync Link Dynamic Random Access Memory,同步链路动态随机存取存储器)、RDRAM(Rambus Dynamic Random Access Memory,总线式动态随机存取存储器)、DRDRAM(Direct Rambus Dynamic Random Access Memory,直接总线式动态随机存取存储器)。
以上实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对本申请专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。

Claims (33)

  1. 一种图像传感器,其特征在于,所述图像传感器包括微透镜阵列、像素阵列及滤光片阵列,所述滤光片阵列包括最小重复单元,所述最小重复单元包括多个滤光片组,所述滤光片组包括彩色滤光片和全色滤光片;所述彩色滤光片具有比所述全色滤光片更窄的光谱响应,所述彩色滤光片和所述全色滤光片均包括阵列排布的9个子滤光片;
    其中,所述像素阵列包括多个像素组,所述像素组为全色像素组或彩色像素组,每个所述全色像素组对应所述全色滤光片,每个所述彩色像素组对应所述彩色滤光片;所述全色像素组和所述彩色像素组均包括9个像素,所述像素阵列的像素与所述滤光片阵列的子滤光片对应设置,且每个像素对应一个感光元件;
    其中,所述微透镜阵列包括多个微透镜组,每个微透镜组对应所述像素组,所述微透镜组中包括多个微透镜,所述多个微透镜中的至少一个微透镜对应至少两个像素。
  2. 根据权利要求1所述的图像传感器,其特征在于,所述滤光片组为4个,4个所述滤光片组呈矩阵排列。
  3. 根据权利要求2所述的图像传感器,其特征在于,在每个所述滤光片组中,所述全色滤光片设置在第一对角线方向,所述彩色滤光片设置在第二对角线方向,所述第一对角线方向与所述第二对角线方向不同。
  4. 根据权利要求3所述的图像传感器,其特征在于,所述滤光片组中包含2个全色滤光片和2个彩色滤光片,所述最小重复单元为12行12列144个所述子滤光片,排布方式为:
    Figure PCTCN2022132425-appb-100001
    其中,w表示全色子滤光片,a、b和c均表示彩色子滤光片。
  5. 根据权利要求3所述的图像传感器,其特征在于,所述滤光片组中包含2个全色滤光片和2个彩色滤光片,所述最小重复单元为12行12列144个所述子滤光片,排布方式为:
    Figure PCTCN2022132425-appb-100002
    其中,w表示全色子滤光片,a、b和c均表示彩色子滤光片。
  6. 根据权利要求1所述的图像传感器,其特征在于,所述微透镜组中包括5个微透镜,所述5个微透镜包括4个第一微透镜及1个第二微透镜;其中,所述第一微透镜分别对应2个像素,所述第二微透镜对应1个像素。
  7. 根据权利要求6所述的图像传感器,其特征在于,4个所述第一微透镜中包括2个第一方向的第一微透镜及2个第二方向的第一微透镜;所述第一方向的第一微透镜在所述微透镜组中沿所述第一方向设置;所述第二方向的第一微透镜在所述微透镜组中沿第二方向设置;
    所述第二微透镜位于所述微透镜组的中心,且所述第二微透镜对应与所述微透镜组对应的所述像素组的中心像素,或,所述第二微透镜位于所述微透镜组的四角之一,且所述第二微透镜对应与所述微透镜组对应的所述像素组中处于四角之一的像素。
  8. 根据权利要求7所述的图像传感器,其特征在于,若所述第二微透镜位于所述微透镜组的中心,则2个所述第一方向的第一微透镜及2个所述第二方向的第一微透镜围绕所述第二微透镜设置;
    其中,2个所述第一方向的第一微透镜相对于所述第二微透镜沿第一对角线方向呈中心对称设置;2个所述第二方向的第一微透镜相对于所述第二微透镜沿第二对角线方向呈中心对称设置。
  9. 根据权利要求1所述的图像传感器,其特征在于,所述多个微透镜中的一个微透镜对应至少四个像素。
  10. 根据权利要求9所述的图像传感器,其特征在于,所述微透镜组中包括4个微透镜,所述4个微透镜包括1个第三微透镜、2个第四微透镜及1个第五微透镜;其中,所述第三微透镜分别对应4个像素,所述第四微透镜对应2个像素,所述第五微透镜对应1个像素。
  11. 一种对焦控制方法,其特征在于,应用于如权利要求1至10中任一项所述的图像传感器,所述方法包括:
    根据当前拍摄场景的光线强度,确定与所述当前拍摄场景的光线强度适配的相位信息输出模式;其中,在不同的相位信息输出模式下,所输出的相位阵列的大小不同;
    按照所述相位信息输出模式,输出与所述像素阵列对应的相位阵列;其中,所述相位阵列包括所述像素阵列中目标像素对应的相位信息;
    基于所述相位阵列计算所述像素阵列的相位差,并根据所述相位差进行对焦控制。
  12. 根据权利要求11所述的方法,其特征在于,按照所述相位信息输出模式,输出与所述像素阵列对应的相位阵列,包括:
    按照所述相位信息输出模式,从所述像素阵列中的所述彩色像素组及所述全色像素组中确定目标像素组,确定与所述目标像素组对应的目标微透镜,并将所述目标微透镜所对应的至少两个像素作为目标像素;
    针对各所述目标像素组,获取所述目标像素的相位信息;
    根据所述目标像素的相位信息,生成与所述像素阵列对应的相位阵列。
  13. 根据权利要求12所述的方法,其特征在于,确定与所述目标像素组对应的目标微透镜,将所述目标微透镜所对应的至少两个像素作为目标像素,包括:
    确定与所述目标像素组对应的第一方向的目标微透镜;所述第一方向的目标微透镜对应沿第一方向排布且相邻的至少2个目标像素;
    将所述第一方向的目标微透镜所对应的至少两个像素作为目标像素。
  14. 根据权利要求12所述的方法,其特征在于,确定与所述目标像素组对应的目标微透镜,将所述目标微透镜所对应的至少两个像素作为目标像素,包括:
    确定与所述目标像素组对应的第一方向的目标微透镜及第二方向的目标微透镜;所述第一方向的目标微透镜对应沿第一方向排布且相邻的至少2个目标像素;所述第二方向的目标微透镜对应沿第二方向排布且相邻的至少2个目标像素;所述第二方向与所述第一方向相互垂直;
    将所述第一方向的目标微透镜所对应的至少两个像素及所述第二方向的目标微透镜所对应的至少两个像素,作为目标像素。
  15. 根据权利要求13-14中任一项所述的方法,其特征在于,所述根据当前拍摄场景的光线强度,确定与所述当前拍摄场景的光线强度适配的相位信息输出模式,包括:
    确定当前拍摄场景的光线强度所属的目标光线强度范围;其中,不同的光线强度范围对应不同的相位信息输出模式;
    根据所述目标光线强度范围,确定与所述当前拍摄场景的光线强度适配的相位信息输出模式。
  16. 根据权利要求15所述的方法,其特征在于,所述相位信息输出模式包括全尺寸输出模式及第一尺寸输出模式,所述全尺寸输出模式下的相位阵列的大小大于或等于所述第一尺寸输出模式下相位阵列的大小;
    所述根据所述目标光线强度范围,确定与所述当前拍摄场景的光线强度适配的相位信息输出模式,包括:
    若所述当前拍摄场景的光线强度大于第一预设阈值,则确定与所述当前拍摄场景的光线强度适配的相位信息输出模式为所述全尺寸输出模式;
    若所述当前拍摄场景的光线强度大于第二预设阈值,且小于或等于所述第一预设阈值,则确定与所述当前拍摄场景的光线强度适配的相位信息输出模式为所述第一尺寸输出模式;所述第一预设阈值大于所述第二预设阈值。
  17. 根据权利要求16所述的方法,其特征在于,若所述相位信息输出模式为全尺寸输出模式,则所述从所述像素阵列中的所述彩色像素组及所述全色像素组中确定目标像素组,包括:
    将所述像素阵列中的彩色像素组作为目标像素组,或将所述像素阵列中的彩色像素组和全色像素组作为目标像素组;其中,一个所述目标像素组对应一个像素组;
    相应的,
    所述根据所述目标像素的相位信息,生成与所述像素阵列对应的相位阵列,包括:
    根据所述目标像素的相位信息,生成与所述像素阵列对应的全尺寸相位阵列。
  18. 根据权利要求16所述的方法,其特征在于,若所述相位信息输出模式为第一尺寸输出模式,则所述从所述像素阵列中的所述彩色像素组及所述全色像素组中确定目标像素组,包括:
    将所述像素阵列中的彩色像素组、全色像素组中的至少一种作为目标像素组;一个所述目标像素组对应一个像素组;
    相应的,
    所述根据所述目标像素的相位信息,生成与所述像素阵列对应的相位阵列,包括:
    将所述目标像素组内的所述目标像素的相位信息进行合并,生成多组中间合并相位信息;
    根据所述多组中间合并相位信息,生成与所述像素阵列对应的第一尺寸相位阵列。
  19. 根据权利要求18所述的方法,其特征在于,若所述目标像素为所述第一方向的目标微透镜所对应的至少两个像素,则所述将所述目标像素的相位信息进行合并,生成多组中间合并相位信息,包括:
    获取所述目标像素组内的至少两个第一方向的目标微透镜,并从所述至少两个第一方向的目标微透镜对应的像素中确定处于同一位置的目标像素;所述处于同一位置的目标像素在对应的所述目标微透镜中的所述第一方向上处于同一方位;
    将所述处于同一位置的目标像素的相位信息进行合并,生成多组第一合并相位信息。
  20. 根据权利要求18所述的方法,其特征在于,若所述目标像素为所述第一方向的目标微透镜所对应的至少两个像素及所述第二方向的目标微透镜所对应的至少两个像素,则所述将所述目标像素组内的所述目标像素的相位信息进行合并,生成多组中间合并相位信息,包括:
    获取所述目标像素组内的至少两个第一方向的目标微透镜,并从所述至少两个第一方向的目标微透镜对应的像素中确定处于同一位置的目标像素;所述处于同一位置的目标像素在对应的所述目标微透镜中的所述第一方向上处于同一方位;
    将所述处于同一位置的目标像素的相位信息进行合并,生成多组第一合并相位信息;
    获取所述目标像素组内的至少两个第二方向的目标微透镜,并从所述至少两个第二方向的目标微透镜对应的像素中确定处于同一位置的目标像素;所述处于同一位置的目标像素在对应的所述目标微透镜中的所述第二方向上处于同一方位;
    将所述处于同一位置的目标像素的相位信息进行合并,生成多组第二合并相位信息;
    基于多组第一合并相位信息及多组第二合并相位信息,生成多组中间合并相位信息。
  21. 根据权利要求18所述的方法,其特征在于,若所述目标像素组包括彩色像素组及全色像素组,则所述根据所述多组中间合并相位信息,生成与所述像素阵列对应的第一尺寸相位阵列,包括:
    根据所述当前拍摄场景的光线强度确定所述彩色像素组对应的第一相位权重以及所述全色像素组对应的第二相位权重;
    基于所述彩色像素组对应的多组中间合并相位信息及所述第一相位权重、所述全色像素组对应的多组中间合并相位信息及所述第二相位权重,生成所述像素阵列的第一尺寸相位阵列。
  22. 根据权利要求16所述的方法,其特征在于,所述相位信息输出模式还包括第二尺寸输出模式;其中,所述第一尺寸输出模式下的相位阵列的大小大于或等于所述第二尺寸输出模式下相位阵列的大小;
    所述根据所述目标光线强度范围确定与所述当前拍摄场景的光线强度适配的相位信息输出模式,包括:
    若所述当前拍摄场景的光线强度小于第二预设阈值,则确定与所述当前拍摄场景的光线强度适配的相位信息输出模式为所述第二尺寸输出模式。
  23. 根据权利要求22所述的方法,其特征在于,若所述相位信息输出模式为所述第二尺寸输出模式,则所述从所述像素阵列中的所述彩色像素组及所述全色像素组中确定目标像素组,包括:
    将所述像素阵列中沿第一对角线方向相邻的两个彩色像素组作为目标像素组及沿第二对角线方向相邻的两个全色像素组作为目标像素组;
    所述根据所述目标像素的相位信息,生成与所述像素阵列对应的相位阵列,包括:
    将所述目标像素组内两个全色像素组的所述目标像素的相位信息进行合并,将所述目标像素组内两个彩色像素组的所述目标像素的相位信息进行合并,生成多组目标合并相位信息;
    根据多组目标合并相位信息,生成与所述像素阵列对应的第二尺寸相位阵列。
  24. 根据权利要求22所述的方法,其特征在于,若所述相位信息输出模式为所述第二尺寸输出模式,则所述从所述像素阵列中的所述彩色像素组及所述全色像素组中确定目标像素组,包括:
    将所述沿第二对角线方向相邻的两个全色像素组作为目标像素组;
    所述根据所述目标像素的相位信息,生成与所述像素阵列对应的相位阵列,包括:
    将所述目标像素组内两个全色像素组的所述目标像素的相位信息进行合并,生成多组目标合并相位信息;
    根据多组目标合并相位信息,生成与所述像素阵列对应的第二尺寸相位阵列。
  25. 根据权利要求24所述的方法,其特征在于,若将所述第一方向的目标微透镜所对应的至少两个像素作为目标像素,则所述将所述目标像素组内两个全色像素组的所述目标像素的相位信息进行合并,生成多组目标合并相位信息,包括:
    获取所述目标像素组内两个全色像素组的至少四个第一方向的目标微透镜,并从所述至少四个第一方向的目标微透镜对应的像素中确定处于同一位置的目标像素;所述处于同一位置的目标像素在对应的所述目标微透镜中的所述第一方向上处于同一方位;
    将所述处于同一位置的目标像素的相位信息进行合并,生成多组第一合并相位信息。
  26. 根据权利要求24所述的方法,其特征在于,若将所述第一方向的目标微透镜所对应的至少两个像素及所述第二方向的目标微透镜所对应的至少两个像素,作为目标像素,则所述将所述目标像素组内两个全色像素组的所述目标像素的相位信息进行合并,生成多组目标合并相位信息,包括:
    获取所述目标像素组内两个全色像素组的至少四个第一方向的目标微透镜,并从所述至少四个第一方向的目标微透镜对应的像素中确定处于同一位置的目标像素;所述处于同一位置的目标像素在对应的所述目标微透镜中的所述第一方向上处于同一方位;
    将所述处于同一位置的目标像素的相位信息进行合并,生成多组第一合并相位信息;
    获取所述目标像素组内两个全色像素组的至少四个第二方向的目标微透镜,并从所述至少四个第二方向的目标微透镜对应的像素中确定处于同一位置的目标像素;所述处于同一位置的目标像素在对应的所述目标微透镜中的所述第二方向上处于同一方位;
    将所述处于同一位置的目标像素的相位信息进行合并,生成多组第二合并相位信息。
  27. 根据权利要求23所述的方法,其特征在于,所述根据多组目标合并相位信息,生成与所述像素阵列对应的第二尺寸相位阵列,包括:
    根据所述当前拍摄场景的光线强度确定所述彩色像素组对应的第三相位权重以及所述全色像素组对应的第四相位权重;
    基于所述彩色像素组对应的多组目标合并相位信息及所述第三相位权重、所述全色像素组对应的多组目标合并相位信息及所述第四相位权重,生成所述像素阵列的第二尺寸相位阵列。
  28. 根据权利要求11所述的方法,其特征在于,在所述按照所述相位信息输出模式,输出与所述像素阵列对应的相位阵列之前,还包括:
    根据用于对焦控制的像素阵列的预设提取比例及预设提取位置,从所述图像传感器中的多个像素阵列中确定目标像素阵列;
    所述按照所述相位信息输出模式,输出与所述像素阵列对应的相位阵列,包括:
    按照所述相位信息输出模式,输出与所述目标像素阵列对应的相位阵列。
  29. 根据权利要求16所述的方法,其特征在于,所述方法还包括:
    根据曝光参数及所述像素的尺寸确定所述光线强度的第一预设阈值及第二预设阈值。
  30. 一种对焦控制装置,其特征在于,应用于如权利要求1至10中任一项所述的图像传感器,所述装置包括:
    相位信息输出模式确定模块,用于根据当前拍摄场景的光线强度,确定与所述当前拍摄场景的光线强度适配的相位信息输出模式;其中,在不同的相位信息输出模式下,所输出的相位阵列的大小不同;
    相位阵列输出模块,用于按照所述相位信息输出模式,输出与所述像素阵列对应的相位阵列;其中,所述相位阵列包括所述像素阵列中目标像素对应的相位信息;
    对焦控制模块,用于基于所述相位阵列计算所述像素阵列的相位差,并根据所述相位差进行对焦控制。
  31. 一种电子设备,包括存储器及处理器,所述存储器中储存有计算机程序,其特征在于,所述计算机程序被所述处理器执行时,使得所述处理器执行如权利要求11至29中任一项所述的对焦控制方法的操作。
  32. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现如权利要求11至29中任一项所述的对焦控制方法的操作。
  33. 一种计算机程序产品,包括计算机程序/指令,其特征在于,该计算机程序/指令被处理器执行时实现如权利要求11至29中任一项所述的对焦控制方法的操作。
PCT/CN2022/132425 2021-12-27 2022-11-17 对焦控制方法、装置、图像传感器、电子设备和计算机可读存储介质 WO2023124611A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111617501.0A CN114222047A (zh) 2021-12-27 2021-12-27 对焦控制方法、装置、图像传感器、电子设备和计算机可读存储介质
CN202111617501.0 2021-12-27

Publications (1)

Publication Number Publication Date
WO2023124611A1 true WO2023124611A1 (zh) 2023-07-06

Family

ID=80706335

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/132425 WO2023124611A1 (zh) 2021-12-27 2022-11-17 对焦控制方法、装置、图像传感器、电子设备和计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN114222047A (zh)
WO (1) WO2023124611A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114222047A (zh) * 2021-12-27 2022-03-22 Oppo广东移动通信有限公司 对焦控制方法、装置、图像传感器、电子设备和计算机可读存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2028843A2 (en) * 2007-08-21 2009-02-25 Ricoh Company, Ltd. Focusing device and imaging apparatus using the same
CN104280803A (zh) * 2013-07-01 2015-01-14 全视科技有限公司 彩色滤光片阵列、彩色滤光片阵列设备及图像传感器
CN213279832U (zh) * 2020-10-09 2021-05-25 Oppo广东移动通信有限公司 图像传感器、相机和终端
CN113286067A (zh) * 2021-05-25 2021-08-20 Oppo广东移动通信有限公司 图像传感器、摄像装置、电子设备及成像方法
CN113660415A (zh) * 2021-08-09 2021-11-16 Oppo广东移动通信有限公司 对焦控制方法、装置、成像设备、电子设备和计算机可读存储介质
CN114222047A (zh) * 2021-12-27 2022-03-22 Oppo广东移动通信有限公司 对焦控制方法、装置、图像传感器、电子设备和计算机可读存储介质

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112118378A (zh) * 2020-10-09 2020-12-22 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image acquisition method and apparatus, terminal, and computer-readable storage medium


Also Published As

Publication number Publication date
CN114222047A (zh) 2022-03-22

Similar Documents

Publication Publication Date Title
WO2023087908A1 Focus control method and apparatus, image sensor, electronic device, and computer-readable storage medium
US9681057B2 Exposure timing manipulation in a multi-lens camera
TWI504257B Exposing pixel groups in producing digital images
US7876363B2 Methods, systems and apparatuses for high-quality green imbalance compensation in images
CN111885308A Phase detection autofocus noise reduction
US9729806B2 Imaging systems with phase detection pixels
WO2023016144A1 Focus control method and apparatus, imaging device, electronic device, and computer-readable storage medium
CN111131798B Image processing method, image processing apparatus, and imaging apparatus
EP3476113A1 Multi diode aperture simulation
WO2021093635A1 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN111741242A Image sensor and operation method thereof
WO2023035900A1 Image sensor, image generation method and apparatus, and electronic device
WO2023124611A1 Focus control method and apparatus, image sensor, electronic device, and computer-readable storage medium
WO2023082766A1 Image sensor, camera module, electronic device, and image generation method and apparatus
WO2023124607A1 Image generation method and apparatus, electronic device, and computer-readable storage medium
US11245878B2 Quad color filter array image sensor with aperture simulation and phase detection
US9497427B2 Method and apparatus for image flare mitigation
KR102632474B1 Pixel array of an image sensor, and image sensor including the same
CN110930440B Image alignment method and apparatus, storage medium, and electronic device
US20230254553A1 Image obtaining method and apparatus, terminal, and computer-readable storage medium
WO2023016183A1 Motion detection method and apparatus, electronic device, and computer-readable storage medium
US11431898B2 Signal processing device and imaging device
JP6364259B2 Imaging apparatus, image processing method, and image processing program
US20190320128A1 Dynamic range estimation with fast and slow sensor pixels
CN112866554B Focusing method and apparatus, electronic device, and computer-readable storage medium

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22913869

Country of ref document: EP

Kind code of ref document: A1