WO2023016144A1 - Focus control method and apparatus, imaging device, electronic device, and computer-readable storage medium - Google Patents

Focus control method and apparatus, imaging device, electronic device, and computer-readable storage medium

Info

Publication number
WO2023016144A1
WO2023016144A1 PCT/CN2022/103859 CN2022103859W WO2023016144A1 WO 2023016144 A1 WO2023016144 A1 WO 2023016144A1 CN 2022103859 W CN2022103859 W CN 2022103859W WO 2023016144 A1 WO2023016144 A1 WO 2023016144A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
pixels
sub
array
phase information
Prior art date
Application number
PCT/CN2022/103859
Other languages
English (en)
French (fr)
Inventor
王文涛
Original Assignee
Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Publication of WO2023016144A1

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • H04N23/672 Focus control based on electronic image sensor signals based on the phase difference signals

Definitions

  • the present application relates to the technical field of image processing, and in particular to a focus control method, device, imaging device, electronic device and computer-readable storage medium.
  • Phase detection autofocus (English: phase detection auto focus; abbreviated: PDAF) is a widely used autofocus technique.
  • Traditional phase detection autofocus mainly calculates a phase difference based on an RGB pixel array and controls a motor according to the phase difference; the motor then drives the lens to a suitable position for focusing, so that the subject is imaged on the focal plane.
  • Embodiments of the present application provide a focus control method, device, imaging device, electronic device, and computer-readable storage medium, which can improve the accuracy of focus control.
  • A focus control method is applied to an electronic device; the electronic device includes an image sensor, the image sensor includes an RGBW pixel array, and the method includes:
  • determining, according to the light intensity of the current shooting scene, a target pixel corresponding to that light intensity from the RGBW pixel array, the target pixel including W pixels or at least one color pixel in the RGBW pixel array;
  • acquiring phase information of the target pixel, and calculating a phase difference according to the phase information of the target pixel; and
  • performing focus control based on the phase difference.
  • An imaging device comprising a lens, a filter and an image sensor, the lens, filter and image sensor are sequentially located on the incident light path;
  • The image sensor includes a plurality of RGBW pixel arrays arranged in an array; each RGBW pixel array includes a plurality of pixel units, each pixel unit includes W pixels arranged along one diagonal and color pixels arranged along the other diagonal, and each pixel corresponds to one microlens and a plurality of photosensitive elements; each pixel includes a plurality of sub-pixels arranged in an array, and each sub-pixel corresponds to one photosensitive element; the color pixels include R pixels, G pixels and B pixels.
  • A focus control device is applied to an electronic device; the electronic device includes an image sensor, the image sensor includes an RGBW pixel array, and the device includes:
  • a target pixel determination module configured to determine, according to the light intensity of the current shooting scene, a target pixel corresponding to that light intensity from the RGBW pixel array, the target pixel including W pixels or at least one color pixel in the RGBW pixel array;
  • a phase difference calculation module configured to acquire phase information of the target pixel and calculate a phase difference according to the phase information of the target pixel; and
  • a focus control module configured to perform focus control based on the phase difference.
  • An electronic device comprises a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the operations of the focus control method described above.
  • A computer-readable storage medium stores a computer program which, when executed by a processor, implements the operations of the focus control method described above.
  • Figure 1 is a schematic diagram of the principle of phase detection autofocus
  • FIG. 2 is a schematic diagram of setting phase detection pixels in pairs among the pixels included in the image sensor
  • Fig. 3 is a partial structural schematic diagram of an RGBW pixel array in an embodiment
  • FIG. 4 is a flowchart of a focus control method in an embodiment
  • FIG. 5 is a schematic diagram of a focus control method in an embodiment
  • FIG. 6 is a flow chart of a method for generating a target image after performing focus control based on a phase difference in an embodiment
  • FIG. 7 is a schematic diagram of a method for generating a target image after performing focus control based on a phase difference in an embodiment
  • FIG. 8 is a flowchart of the method in FIG. 4 for acquiring the phase information collected by the target pixel and calculating the phase difference according to the phase information of the target pixel;
  • Fig. 9 is a schematic diagram of a focus control method in another embodiment.
  • FIG. 10 is a flowchart of a method for generating a target image after performing focus control based on a phase difference in another embodiment
  • Fig. 11 is a schematic diagram of a method for generating a target image after performing focus control based on a phase difference in another embodiment
  • FIG. 12 is a flowchart, in another embodiment, of the method in FIG. 4 for acquiring the phase information collected by the target pixel and calculating the phase difference according to the phase information of the target pixel;
  • Fig. 13 is a schematic diagram of a focus control method in another embodiment
  • Fig. 14 is a schematic diagram of an RGBW pixel array in yet another embodiment
  • Fig. 15 is a schematic diagram of an RGBW pixel array in another embodiment
  • Fig. 16 is a structural block diagram of a focus control device in an embodiment
  • Fig. 17 is a schematic diagram of the internal structure of an electronic device in one embodiment.
  • The terms "first", "second" and the like used in this application may be used to describe various elements herein, but these elements are not limited by these terms. These terms are only used to distinguish one element from another element.
  • a first client could be termed a second client, and, similarly, a second client could be termed a first client, without departing from the scope of the present application.
  • Both the first client and the second client are clients, but they are not the same client.
  • FIG. 1 is a schematic diagram of the principle of phase detection auto focus (PDAF).
  • M1 is the position of the image sensor when the imaging device is in the in-focus state, wherein the in-focus state refers to a state of successful focus.
  • In the in-focus state, the imaging light g reflected by the object W toward the lens Lens in different directions converges on the image sensor; that is, the imaging light g reflected by the object W toward the lens Lens in different directions forms an image at the same position on the image sensor, and at this time the image on the image sensor is clear.
  • M2 and M3 are the possible positions of the image sensor when the imaging device is not in focus.
  • When the image sensor is at the M2 position or the M3 position, the imaging light g reflected by the object W toward the lens Lens in different directions forms images at different positions. When the image sensor is at the position M2, the imaging light g reflected by the object W toward the lens Lens in different directions forms images at position A and position B respectively; when the image sensor is at the position M3, the imaging light g reflected by the object W toward the lens Lens in different directions forms images at position C and position D respectively. At this time, the image on the image sensor is not clear.
  • To focus, the difference in position between the images formed on the image sensor by imaging light entering the lens from different directions can be obtained; for example, as shown in FIG. 1, the difference between position A and position B, or the difference between position C and position D, can be obtained. After this positional difference is obtained, the defocus distance can be calculated from the difference and the geometric relationship between the lens and the image sensor in the camera.
  • The so-called defocus distance refers to the distance between the current position of the image sensor and the position where the image sensor should be in the in-focus state; the imaging device can focus according to the obtained defocus distance.
  • In the in-focus state, the calculated PD (phase difference) value is 0. The larger the calculated value, the farther the image sensor is from the in-focus position; the smaller the value, the closer it is.
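  • As a rough illustration of the relationship described above, the following sketch converts a phase difference into a signed defocus distance through a linear calibrated gain; the function name pd_to_defocus and the gain/offset parameters are illustrative assumptions, not values from this application.

```python
def pd_to_defocus(pd: float, gain: float, offset: float = 0.0) -> float:
    """Convert a phase difference (PD) into a signed defocus distance.

    Assumes a linear calibration model (defocus = gain * pd + offset),
    a common simplification: PD = 0 means in focus, and a larger |PD|
    means the image sensor is farther from the in-focus position.
    """
    return gain * pd + offset

# Hypothetical calibration: 2.5 um of defocus per unit of PD.
print(pd_to_defocus(pd=4.0, gain=2.5))  # -> 10.0 (um from focus)
```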
  • phase detection pixel points can be set in pairs among the pixel points included in the image sensor.
  • For example, the image sensor may be provided with phase detection pixel pairs (hereinafter referred to as pixel pairs) A, B and C.
  • In each pixel pair, one phase detection pixel is shielded on the left side (English: Left Shield), and the other phase detection pixel is shielded on the right side (English: Right Shield).
  • the imaging beam can be divided into left and right parts, and the phase difference can be obtained by comparing the images formed by the left and right parts of the imaging beam.
  • the electronic device includes an image sensor, and the image sensor includes a plurality of RGBW pixel arrays arranged in an array.
  • FIG. 3 is a schematic diagram of an RGBW pixel array. Compared with the general Bayer pattern (Bayer pixel array), the RGBW pattern (pixel array) increases the amount of light passing through and improves the signal-to-noise ratio of the collected signal.
  • Each RGBW pixel array includes a plurality of pixel units Z, as shown in FIG. 3 , each RGBW pixel array includes 4 pixel units Z.
  • The four pixel units Z are respectively a red pixel unit, a green pixel unit, a green pixel unit and a blue pixel unit.
  • In other embodiments, each RGBW pixel array may include 6 or 8 pixel units Z, which is not limited in this application.
  • Each pixel unit Z includes W pixels (white pixels) D arranged along one diagonal and color pixels D arranged along the other diagonal, and each pixel D corresponds to one microlens and a plurality of photosensitive elements; each pixel includes a plurality of sub-pixels arranged in an array, and each sub-pixel corresponds to one photosensitive element.
  • the color pixel D includes R pixel (red pixel), G pixel (green pixel) and B pixel (blue pixel).
  • For the red pixel unit, it includes 2 W pixels arranged along one diagonal and 2 R pixels arranged along the other diagonal; for the green pixel unit, it includes 2 W pixels arranged along one diagonal and 2 G pixels arranged along the other diagonal; for the blue pixel unit, it includes 2 W pixels arranged along one diagonal and 2 B pixels arranged along the other diagonal.
  • each W pixel D includes a plurality of sub-pixels d arranged in an array
  • each color pixel D includes a plurality of sub-pixels d arranged in an array
  • each sub-pixel d corresponds to a photosensitive element.
  • the photosensitive element is an element capable of converting light signals into electrical signals.
  • the photosensitive element can be a photodiode.
  • each W pixel D includes 4 sub-pixels d (ie, 4 photodiodes) arranged in an array
  • each color pixel D includes 4 sub-pixels d (ie, 4 photodiodes) arranged in an array.
  • the green pixel D includes four photodiodes (Up-Left PhotoDiode, Up-Right PhotoDiode, Down-Left PhotoDiode and Down-Right PhotoDiode) arranged in an array.
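  • A minimal sketch of this layout, assuming the 2 × 2 sub-pixel (four-photodiode) pixels of FIG. 3, with color pixels on one diagonal of each unit and W pixels on the other; the array representation and names below are illustrative only.

```python
import numpy as np

SUBS = 2  # 2x2 sub-pixels per pixel, one per photodiode

def make_unit(color: str) -> np.ndarray:
    """Return a (2*SUBS, 2*SUBS) label array for one pixel unit: color
    pixels on the main diagonal, W pixels on the other diagonal."""
    unit = np.empty((2 * SUBS, 2 * SUBS), dtype="<U1")
    unit[:SUBS, :SUBS] = color  # upper-left pixel: color
    unit[SUBS:, SUBS:] = color  # lower-right pixel: color
    unit[:SUBS, SUBS:] = "W"    # upper-right pixel: white
    unit[SUBS:, :SUBS] = "W"    # lower-left pixel: white
    return unit

# The four units of one RGBW pixel array: R, G / G, B.
top = np.hstack([make_unit("R"), make_unit("G")])
bottom = np.hstack([make_unit("G"), make_unit("B")])
print(np.vstack([top, bottom]))
```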
  • Fig. 4 is a flowchart of a focus control method in an embodiment.
  • the focus control method in the embodiment of the present application is described by taking an electronic device with a shooting function as an example.
  • The electronic device can be a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a wearable device (smart bracelet, smart watch, smart glasses, smart gloves, smart socks, smart belt, etc.), a VR (virtual reality) device, a smart home device, a driverless car, or any other terminal device.
  • the electronic device includes an image sensor, and the image sensor includes an RGBW pixel array.
  • The focus control method includes operations 420 to 460.
  • Operation 420: Determine, according to the light intensity of the current shooting scene, the target pixel corresponding to that light intensity from the RGBW pixel array; the target pixel includes W pixels or at least one color pixel in the RGBW pixel array.
  • The light intensity differs between shooting scenes, and because the sensitivity of the RGB pixel array varies with light intensity, the phase difference calculated from the RGB pixel array is inaccurate under some light intensities, which in turn significantly reduces focusing accuracy.
  • Light intensity is also referred to as illumination intensity.
  • Light intensity is a physical term referring to the luminous flux of visible light received per unit area, referred to as illuminance; the unit is lux (lx).
  • Light intensity indicates how strong the light is and how much the surface of an object is illuminated. Table 1-1 below shows the light intensity values under different weather conditions and locations:
  • In this application, the RGB pixel array of the image sensor in the traditional method is replaced with an RGBW pixel array. Compared with an RGB pixel array, the RGBW pixel array adds white areas to the RGB three-color color filter, which increases light transmittance. Because the W pixels have higher sensitivity, the RGBW pixel array can calculate the phase difference more accurately than the RGB pixel array in scenes with weak light, thereby improving focusing accuracy.
  • a target pixel corresponding to the light intensity of the current shooting scene is determined from W pixels or at least one color pixel of the RGBW pixel array.
  • The light intensity of the current shooting scene, that is, the illuminance of the current shooting scene, may be obtained through a light sensor on the electronic device.
  • the target pixel corresponding to the light intensity of the current shooting scene is determined from the RGBW pixel array.
  • If the light intensity of the current shooting scene is less than the preset light intensity threshold, the W pixels are determined as the target pixel, so that more phase information can be obtained through the W pixels. If the light intensity of the current shooting scene is greater than or equal to the preset threshold, at least one of the RGB pixels is determined as the target pixel, because accurate phase information can then be obtained through the RGB pixels, whereas the highly sensitive W pixels saturate easily, which would reduce the accuracy of the acquired phase information.
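  • A minimal sketch of this selection rule, assuming a single threshold in lux; the 50 lx default only echoes the cloudy-scene example cited later and is not a prescribed value.

```python
def select_target_pixels(light_lux: float, threshold_lux: float = 50.0) -> str:
    """Choose which pixels supply phase information, per the rule above:
    weak light -> the more sensitive W pixels; otherwise -> RGB pixels,
    since W pixels saturate easily under strong light."""
    return "W" if light_lux < threshold_lux else "RGB"

print(select_target_pixels(10.0))   # weak light   -> 'W'
print(select_target_pixels(300.0))  # strong light -> 'RGB'
```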
  • phase information of the target pixel is acquired, and a phase difference is calculated according to the phase information of the target pixel.
  • Specifically, the phase information collected by the sub-pixels included in the target pixel can be read. Then the signal differences of the sub-pixel phase signals can be calculated in four directions of the target pixel: the first direction, the second direction, and the two diagonal directions (a first diagonal direction and a second diagonal direction perpendicular to the first diagonal direction), to obtain the phase differences in these four directions.
  • the first direction is the vertical direction of the RGBW pixel array
  • the second direction is the horizontal direction of the RGBW pixel array
  • the first direction and the second direction are perpendicular to each other.
  • phase differences in other directions of the sub-pixels included in the target pixel can also be calculated, which is not limited in this application.
  • focus control is performed based on the phase difference.
  • If the preview image corresponding to the current shooting scene includes texture features in the second direction, focus control is performed based on the phase difference in the first direction.
  • the first direction is the vertical direction of the RGBW pixel array
  • the second direction is the horizontal direction of the RGBW pixel array
  • the first direction and the second direction are perpendicular to each other.
  • That the preview image includes texture features in the second direction means that the preview image contains horizontal stripes, for example solid-colored horizontal stripes.
  • In this case, focus control is performed based on the phase difference in the vertical direction.
  • Similarly, if the preview image includes texture features in the first direction, focus control is performed based on the phase difference in the second direction. If the preview image corresponding to the current shooting scene includes texture features in the first diagonal direction, focus control is performed based on the phase difference in the second diagonal direction, and vice versa. In this way, the phase difference can be accurately collected for texture features in different directions.
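  • The direction pairing above can be summarized in a small lookup, sketched below; the direction labels are illustrative names, not terms defined in this application.

```python
def choose_pd_direction(texture_direction: str) -> str:
    """Map the dominant texture direction in the preview image to the
    phase-difference direction used for focusing (perpendicular pairing)."""
    pairing = {
        "second (horizontal)": "first (vertical)",
        "first (vertical)": "second (horizontal)",
        "first diagonal": "second diagonal",
        "second diagonal": "first diagonal",
    }
    return pairing[texture_direction]

# Horizontal stripes -> focus on the vertical phase difference.
print(choose_pd_direction("second (horizontal)"))
```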
  • In the traditional method, the accuracy of the phase difference calculated from the RGB pixel array is low under some light intensities, so the accuracy of focusing is also greatly reduced.
  • In the embodiments of this application, a target pixel corresponding to the light intensity of the current shooting scene is determined from the W pixels or at least one color pixel of the RGBW pixel array. Therefore, under light intensities where the phase difference calculated from the phase information of at least one color pixel in the RGBW pixel array would be inaccurate, the phase difference is calculated from the phase information of the W pixels instead; likewise, where the phase difference calculated from the phase information of the W pixels would be inaccurate, it is calculated from the phase information of at least one color pixel. This ultimately improves the accuracy of phase focusing.
  • In one embodiment, operation 420, determining the target pixel corresponding to the light intensity of the current shooting scene from the RGBW pixel array, includes:
  • determining, according to the light intensity of the current shooting scene and a preset light intensity threshold, the target pixel corresponding to that light intensity from the RGBW pixel array.
  • The preset light intensity threshold is a threshold on illuminance. Based on Table 1-1 above, the light intensity value of 50 lx for a cloudy indoor/outdoor scene can be set as the first preset light intensity threshold (hereinafter referred to as the first preset threshold).
  • the present application does not limit the specific value of the first preset threshold.
  • If the light intensity of the current shooting scene is less than or equal to the first preset threshold, the light is weak, and the W pixels are determined as the target pixel so that more phase information can be obtained through them. If the light intensity of the current shooting scene is greater than the first preset threshold, at least one of the RGB pixels is determined as the target pixel, because accurate phase information can then be obtained through the RGB pixels, whereas the more sensitive W pixels saturate easily, which would reduce the accuracy of the acquired phase information.
  • When the light is weak, the W pixels, whose sensitivity is strong, are used as the target pixel, and the phase difference can then be accurately calculated through the W pixels for focus control.
  • When the light is strong, at least one of the RGB pixels is used as the target pixel, and the phase difference can then be accurately calculated through at least one of the RGB pixels for focus control.
  • accurate focus control can be achieved under different light intensities.
  • In one embodiment, determining the target pixel corresponding to the light intensity of the current shooting scene from the RGBW pixel array includes:
  • if the light intensity of the current shooting scene exceeds the first preset threshold, using at least one color pixel in the RGBW pixel array as the target pixel.
  • If the light intensity of the current shooting scene is greater than or equal to the first preset threshold, at least one of the RGB pixels is determined as the target pixel, because accurate phase information can be obtained through the RGB pixels at this light level, whereas the more sensitive W pixels saturate easily, which would reduce the accuracy of the acquired phase information.
  • operation 440, acquiring the phase information collected by the target pixel, and calculating the phase difference according to the phase information of the target pixel includes:
  • if the light intensity of the current shooting scene exceeds a second preset threshold, acquiring the phase information of the sub-pixels of each pixel in the target pixel, the second preset threshold being greater than the first preset threshold; and
  • for two pixels with the same color in the target pixel, calculating the phase difference of the target pixel according to the phase information of each pair of sub-pixels in the two same-color pixels, the two same-color pixels being adjacent along the diagonal of the pixel array, and each pair of sub-pixels being located respectively in the two same-color pixels at the same position within each pixel.
  • If the light intensity of the current shooting scene exceeds the second preset threshold, the target pixel is also at least one color pixel in the RGBW pixel array.
  • The phase information of the sub-pixels of each pixel in the target pixel is then acquired, that is, the phase information of the sub-pixels of at least one color pixel in the RGBW pixel array. Two pixels of the same color that are adjacent along the diagonal of the pixel array are then determined from the target pixel. For these two same-color pixels, the phase difference of the target pixel is calculated according to the phase information of each pair of sub-pixels, where the sub-pixels of each pair are located respectively in the two same-color pixels and occupy the same position within each pixel.
  • At least one color pixel in the RGBW pixel array is used as the target pixel.
  • any one of R pixel, G pixel, and B pixel may be used as the target pixel, for example, R pixel is used as the target pixel, or G pixel is used as the target pixel, or B pixel is used as the target pixel.
  • all the R pixels, G pixels, and B pixels may be used as target pixels. This is not limited in this application.
  • As shown in FIG. 5, a schematic diagram of focus control in an embodiment: after the phase information of each sub-pixel in the R, G and B pixels is read, two pixels with the same color are determined from the R pixels, the two pixels being adjacent along the diagonal of the pixel array. Each pair of sub-pixels is then determined from the two same-color pixels; the sub-pixels of each pair are located respectively in the two pixels and occupy the same position within each pixel. The phase information of each pair of sub-pixels is input to the ISP, and the phase difference of the R pixel is calculated by the ISP.
  • the RGBW pixel array is divided into a first pixel unit (R pixel unit), a second pixel unit (G pixel unit), a third pixel unit (G pixel unit) and a fourth pixel unit (B pixel unit).
  • the four sub-pixels of the upper left R pixel in the first pixel unit are numbered as sub-pixel 1, sub-pixel 2, sub-pixel 3 and sub-pixel 4 from top to bottom and from left to right.
  • the four sub-pixels of the R pixel in the lower right corner of the first pixel unit are numbered as sub-pixel 5, sub-pixel 6, sub-pixel 7 and sub-pixel 8 from top to bottom and from left to right.
  • That is, according to the phase information of sub-pixel 1 and sub-pixel 5 in the R pixels, the first phase difference of the R pixel is calculated; according to the phase information of sub-pixel 2 and sub-pixel 6, the second phase difference of the R pixel is calculated; according to the phase information of sub-pixel 3 and sub-pixel 7, the third phase difference of the R pixel is calculated; and according to the phase information of sub-pixel 4 and sub-pixel 8, the fourth phase difference of the R pixel is calculated.
  • the phase difference of the R pixel can be obtained based on the first phase difference, the second phase difference, the third phase difference and the fourth phase difference of the R pixel.
  • For example, the combination may be performed by calculating a weighted average, which is not limited in this application.
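  • A minimal sketch of this pairing, assuming each pairwise phase difference is taken as a signal difference (as in the description of operation 440) and merged by the weighted average mentioned above; the signal model and weights are illustrative assumptions.

```python
import numpy as np

def pixel_pair_pd(sub_a, sub_b, weights=None) -> float:
    """Phase difference for two same-color pixels adjacent along the
    array diagonal. `sub_a`/`sub_b` hold the 4 sub-pixel phase signals
    (sub-pixels 1-4 and 5-8 above, top-to-bottom, left-to-right); the
    pairs (1,5), (2,6), (3,7), (4,8) give four phase differences that
    are merged by a weighted average."""
    pairwise = np.asarray(sub_b, float) - np.asarray(sub_a, float)
    if weights is None:
        weights = np.ones_like(pairwise)
    return float(np.average(pairwise, weights=weights))

# Hypothetical phase signals for the two R pixels of FIG. 5.
print(pixel_pair_pd([10.0, 12.0, 11.0, 13.0], [12.5, 14.0, 13.0, 15.5]))
```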
  • The above operation is also performed on the G pixels in each pixel unit that are adjacent along the diagonal of the pixel array, to obtain the phase differences of the G pixels.
  • The above operation is also performed on the B pixels in each pixel unit that are adjacent along the diagonal of the pixel array, to obtain the phase differences of the B pixels.
  • After the phase difference is obtained, the distance the lens must move to reach the in-focus position can be calculated; a motor code value is then derived from that distance, the motor's driver IC converts the code value into a drive current, and the current drives the lens to the in-focus position. This completes the focus control process.
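  • A hedged sketch of that chain, from defocus distance to motor code value to drive current; both conversion factors are hypothetical calibration constants, not values from this application.

```python
def focus_move(defocus_um: float, um_per_code: float = 0.5,
               ma_per_code: float = 0.12) -> tuple[int, float]:
    """Convert a defocus distance into the motor code value and the
    drive current that the motor's driver IC would output for it."""
    code = round(defocus_um / um_per_code)  # code value driving the motor
    return code, code * ma_per_code         # driver IC drive current (mA)

print(focus_move(10.0))  # e.g. (20, 2.4) with these illustrative constants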
  • In this embodiment, if the light intensity of the current shooting scene exceeds the second preset threshold (which is greater than the first preset threshold), the phase information of the sub-pixels of each pixel in the target pixel is acquired. Because the light intensity is relatively high, the phase information collected by the sub-pixels of the RGB pixels is relatively accurate, so for same-color pixels in the target pixel that are adjacent along the diagonal of the pixel array, the phase difference of the target pixel is calculated directly from the phase information of each pair of sub-pixels in the same-color pixels. Focusing is then performed based on the phase difference of the target pixel, which ultimately improves the accuracy of phase focusing.
  • the method further includes:
  • Operation 620: Control exposure of the RGBW pixel array and obtain the pixel values of all sub-pixels in the RGBW pixel array.
  • the exposure of the RGBW pixel array is controlled, and the pixel values of all sub-pixels in the RGBW pixel array are obtained. That is, the pixel values of the sub-pixels of each R pixel, G pixel, B pixel and W pixel in the RGBW pixel array are acquired.
  • As shown in FIG. 7, a schematic diagram of generating a target image in an embodiment, the pixel values of the sub-pixels in the RGBW pixel array are obtained to form the original RAW image 702.
  • the pixel value of the sub-pixel of the color pixel is obtained from the pixel value of the sub-pixel, and an interpolation operation is performed on the pixel value of the sub-pixel of the color pixel to generate a Bayer array image.
  • the pixel values of sub-pixels of R pixels, G pixels, and B pixels are acquired from the original RAW image 702 to generate a RAW image 704 corresponding to the RGB pixels.
  • a Bayer array image 706 is generated by interpolating pixel values of sub-pixels of R pixels, G pixels, and B pixels in the RAW image 704 corresponding to the RGB pixels.
  • The Bayer array image is a 4 × 4 array composed of 8 green, 4 blue and 4 red pixels. When converting the grayscale image into a color image, 9 operations are performed in a 2 × 2 matrix, finally generating a color image.
  • The Remosaic interpolation algorithm can be used for the interpolation processing. Remosaic works mainly through pixel exchange, or through the relationship between a pixel and its surrounding related pixels: a weight ratio is determined according to the distance between the pixel and each surrounding related pixel, and the pixel values of the surrounding related pixels are then generated based on this weight ratio and the pixel value of the pixel itself.
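  • A schematic sketch of the distance-based weighting idea, assuming missing samples of one channel are filled from known samples by inverse-distance weights; this only illustrates the weight-ratio principle, not the actual Remosaic algorithm.

```python
import numpy as np

def fill_by_distance_weight(channel: np.ndarray, known: np.ndarray) -> np.ndarray:
    """Fill unknown positions of one color channel using an
    inverse-distance weighted average of the known samples."""
    out = channel.astype(float).copy()
    ys, xs = np.nonzero(known)                # positions of known samples
    for y in range(channel.shape[0]):
        for x in range(channel.shape[1]):
            if known[y, x]:
                continue
            d = np.hypot(ys - y, xs - x)      # distances to known samples
            w = 1.0 / (d + 1e-6)              # closer samples weigh more
            out[y, x] = np.sum(w * channel[ys, xs]) / np.sum(w)
    return out

raw = np.array([[5.0, 0.0], [0.0, 9.0]])
known = np.array([[True, False], [False, True]])
print(fill_by_distance_weight(raw, known))    # unknowns become 7.0
```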
  • the pixel values of sub-pixels of W pixels are acquired from the pixel values of sub-pixels, and an interpolation operation is performed on the pixel values of sub-pixels of W pixels to generate a W-pixel image.
  • the pixel values of sub-pixels of W pixels are acquired from the original RAW image 702 to generate a RAW image 708 corresponding to the W pixels.
  • An interpolation operation is performed on the pixel values of the sub-pixels of the W pixels in the RAW image 708 corresponding to the W pixels to generate a W-pixel image 710 .
  • the Bayer array image is fused with the W pixel image to generate a target image.
  • the Bayer array image 706 is fused with the W pixel image 710 to generate a target image 712 .
  • During fusion, the pixel value of each sub-pixel in the Bayer array image 706 can be directly combined with the pixel value of the corresponding sub-pixel in the W pixel image 710 to generate the pixel value of the sub-pixel at the corresponding position in the target image 712.
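  • The text only states that corresponding pixel values are combined; the sketch below uses one simple assumed combination rule (adding a scaled W value as a luminance boost), with w_gain as a hypothetical parameter.

```python
import numpy as np

def fuse_bayer_with_w(bayer: np.ndarray, w_image: np.ndarray,
                      w_gain: float = 0.5) -> np.ndarray:
    """Fuse a Bayer array image with a W pixel image position by
    position; here the W value simply boosts each Bayer sample."""
    assert bayer.shape == w_image.shape
    return bayer.astype(float) + w_gain * w_image.astype(float)

bayer = np.full((4, 4), 100.0)
w_img = np.full((4, 4), 40.0)
print(fuse_bayer_with_w(bayer, w_img)[0, 0])  # -> 120.0
```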
  • In this embodiment, the phase information of the sub-pixels of each pixel in the target pixel is acquired. Because the light intensity is relatively high, the phase information collected by the sub-pixels of the RGB pixels is accurate, so for adjacent same-color pixels in the target pixel, the phase difference of the target pixel is calculated directly from the phase information of sub-pixels at the same positions in the two same-color pixels. Focusing is then performed based on the phase difference of the target pixel, which ultimately improves the accuracy of phase focusing.
  • the exposure of the RGBW pixel array is controlled, and the pixel values of all sub-pixels in the RGBW pixel array are obtained. Because the light intensity at this time is relatively large, the signal-to-noise ratio of the pixel value of each sub-pixel is relatively large. Therefore, the pixel value of the sub-pixel of the color pixel is obtained from the pixel value of the sub-pixel, and the pixel value of the sub-pixel of the color pixel is directly interpolated to generate a Bayer array image.
  • the pixel values of the sub-pixels of the W pixels are directly obtained from the pixel values of the sub-pixels, and an interpolation operation is performed on the pixel values of the sub-pixels of the W pixels to generate a W-pixel image.
  • the Bayer array image is fused with the W pixel image to generate the target image. Since the light intensity is relatively high at this time, the pixel value of each sub-pixel is directly interpolated, which can improve the resolution of the final generated target image while ensuring a relatively high signal-to-noise ratio.
  • In one embodiment, each RGBW pixel array includes a plurality of pixel units, each pixel unit includes a plurality of pixels, and each pixel includes a plurality of sub-pixels. Operation 440, acquiring the phase information collected by the target pixel and calculating the phase difference according to the phase information of the target pixel, includes:
  • In this embodiment, the target pixel is also at least one color pixel in the RGBW pixel array. The phase information of the sub-pixels of each pixel in the target pixel is then acquired; that is, the phase information of the sub-pixels of at least one color pixel in the RGBW pixel array is obtained.
  • At least one color pixel in the RGBW pixel array is used as the target pixel.
  • any one of R pixel, G pixel, and B pixel may be used as the target pixel, for example, R pixel is used as the target pixel, or G pixel is used as the target pixel, or B pixel is used as the target pixel.
  • all the R pixels, G pixels, and B pixels may be used as target pixels. This is not limited in this application.
  • Operation 840: For each pixel unit, combine the phase information of the sub-pixels located in the same region along the first direction within each same-color pixel, to obtain the combined phase information of the same-color pixels of the pixel unit in the first direction, and calculate the phase difference in the first direction according to the combined phase information of each pixel in the first direction; or,
  • The RGBW pixel array includes 4 pixel units. For each pixel unit, first determine the sub-pixels located in the same region along the first direction within each same-color pixel.
  • the first direction is the vertical direction of the RGBW pixel array
  • the second direction is the horizontal direction of the RGBW pixel array
  • the first direction and the second direction are perpendicular to each other.
  • phase differences in other directions of the sub-pixels included in the target pixel can also be calculated, which is not limited in this application.
  • FIG. 9 is a schematic diagram of focus control in an embodiment.
  • For the R pixel unit in the RGBW pixel array 920, first determine the sub-pixels of the R pixels located in the same region along the first direction.
  • the four sub-pixels of the R pixel in the upper left corner of the first pixel unit (R pixel unit) are numbered as sub-pixel 1, sub-pixel 2, sub-pixel 3 and sub-pixel from top to bottom and from left to right 4 (refer to Figure 5).
  • the four sub-pixels of the R pixel in the lower right corner of the first pixel unit (R pixel unit) are numbered as sub-pixel 5, sub-pixel 6, sub-pixel 7, and sub-pixel 8 from top to bottom and from left to right ( Refer to Figure 5).
  • It is determined that the sub-pixels of the upper-left R pixel in the same region along the first direction are sub-pixel 1 and sub-pixel 3, and the sub-pixels of the lower-right R pixel in the same region along the first direction are sub-pixel 5 and sub-pixel 7.
  • the phase difference in the first direction is calculated according to the combined phase information of each pixel in the first direction. For example, for the combined RGB pixel array 940, according to the combined phase information of two R pixels in the R pixel unit in the first direction, the phase difference of the R pixels in the first direction is calculated.
  • the above operation is also performed for the G pixel in each pixel unit, and the G pixel is calculated according to the combined phase information of the two G pixels in the first direction in each G pixel unit phase difference in the first direction.
  • the above operation is also performed for the B pixel in each pixel unit, and the B pixel is calculated according to the combined phase information of the two B pixels in each B pixel unit in the first direction. phase difference in the first direction.
  • In one embodiment, any two of the R pixels, G pixels and B pixels are used as target pixels, or all of the R pixels, G pixels and B pixels are used as target pixels. The corresponding phase differences are then selected from the phase differences of the R pixels, G pixels and B pixels in the first direction obtained above, and combined to generate the phase difference in the first direction.
  • Operation 860: For each pixel unit, combine the phase information of the sub-pixels located in the same region along the second direction within each same-color pixel, to obtain the combined phase information of the same-color pixels of the pixel unit in the second direction, and calculate the phase difference in the second direction according to the combined phase information of each pixel in the second direction; the first direction and the second direction are perpendicular to each other.
  • For the R pixel unit in the RGBW pixel array, first determine the sub-pixels of the R pixels located in the same region along the second direction.
  • the four sub-pixels of the R pixel in the upper left corner of the first pixel unit (R pixel unit) are numbered as sub-pixel 1, sub-pixel 2, sub-pixel 3, and sub-pixel from top to bottom and from left to right. 4.
  • the four sub-pixels of the R pixel in the lower right corner of the first pixel unit (R pixel unit) are numbered as sub-pixel 5, sub-pixel 6, sub-pixel 7 and sub-pixel 8 from top to bottom and from left to right.
  • It is determined that the sub-pixels of the upper-left R pixel in the same region along the second direction are sub-pixel 1 and sub-pixel 2, and the sub-pixels of the lower-right R pixel in the same region along the second direction are sub-pixel 5 and sub-pixel 6.
  • the phase difference in the second direction is calculated according to the combined phase information of each pixel in the second direction. For example, for the combined RGB pixel array 940, according to the combined phase information of two R pixels in the R pixel unit in the second direction, the phase difference of the R pixels in the second direction is calculated.
  • the above operation is also performed for the G pixel in each pixel unit, and the G pixel is calculated according to the combined phase information of the two G pixels in each G pixel unit in the second direction. phase difference in the second direction.
  • the above operation is also performed on the B pixel in each pixel unit, and the B pixel is calculated according to the combined phase information of the two B pixels in each B pixel unit in the second direction. phase difference in the second direction.
  • any two of R pixels, G pixels, and B pixels are used as target pixels, or all R pixels, G pixels, and B pixels are used as target pixels. Then select the corresponding phase difference from the phase difference of the R pixel in the second direction, the phase difference of the G pixel in the second direction and the phase difference of the B pixel in the second direction obtained from the above calculation, and combine them to generate the phase in the second direction Difference.
  • In this embodiment, the phase information of the sub-pixels of each pixel in the target pixel is acquired. Because the light intensity at this time is somewhat weak, the phase information collected by the sub-pixels of the RGB pixels is not very accurate, and some RGB pixels may collect no phase information at all. Therefore, for each pixel unit, the phase information of the sub-pixels located in the same region along the first direction/second direction within each same-color pixel is combined to obtain the combined phase information of the same-color pixels of the pixel unit in the first direction/second direction, and the phase difference in the first direction/second direction is calculated according to the combined phase information of each pixel in that direction.
  • In this way, the accuracy and the signal-to-noise ratio of the acquired phase information are improved. Focusing is then performed based on the phase differences in the first direction and the second direction, which ultimately improves the accuracy of phase focusing.
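  • A minimal numeric sketch of this combination, assuming 2 × 2 sub-pixel pixels numbered top-to-bottom, left-to-right as above: summing along axis 0 merges the first-direction pairs (1,3)/(2,4), and along axis 1 the second-direction pairs (1,2)/(3,4). The simple mean difference stands in for however the phase difference is actually derived from the combined signals.

```python
import numpy as np

def binned_pd(pix_a: np.ndarray, pix_b: np.ndarray, axis: int = 0) -> float:
    """Phase difference for two same-color 2x2 pixels after first
    combining same-region sub-pixel phase signals along `axis`."""
    comb_a = pix_a.sum(axis=axis)   # two combined phase values per pixel
    comb_b = pix_b.sum(axis=axis)
    return float(np.mean(comb_b - comb_a))

a = np.array([[10.0, 12.0], [11.0, 13.0]])  # sub-pixels 1,2 / 3,4
b = np.array([[12.0, 14.5], [13.0, 15.0]])  # sub-pixels 5,6 / 7,8
print(binned_pd(a, b, axis=0))  # first-direction combination
print(binned_pd(a, b, axis=1))  # second-direction combination
```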
  • the method further includes:
  • Operation 1010: Control the exposure of the RGBW pixel array and obtain the pixel values of the sub-pixels in the RGBW pixel array.
  • the exposure of the RGBW pixel array is controlled, and the pixel values of the sub-pixels in the RGBW pixel array are acquired. That is, the pixel values of the sub-pixels of each R pixel, G pixel, B pixel and W pixel in the RGBW pixel array are acquired.
  • As shown in FIG. 11, a schematic diagram of generating a target image in an embodiment, the pixel values of the sub-pixels in the RGBW pixel array are obtained to form the original RAW image 1102.
  • the pixel value of the color pixel is calculated according to the pixel value of the sub-pixel of each color pixel.
  • the pixel values of sub-pixels of R pixels, G pixels, and B pixels are obtained from the original RAW image 1102 to generate a RAW image 1104 corresponding to the RGB pixels.
  • By combining the pixel values of the sub-pixels of each color pixel, a merged RAW image 1106 corresponding to the RGB pixels is generated.
  • an interpolation operation is performed on the pixel values of the color pixels to generate a Bayer array image.
  • the interpolation operation is performed on the pixel values of R pixels, G pixels, and B pixels in the merged RAW image 1106 corresponding to the RGB pixels to generate a Bayer array image 1108 .
  • The Remosaic interpolation algorithm can be used for the interpolation processing. Remosaic works mainly through pixel exchange, or through the relationship between a pixel and its surrounding related pixels: a weight ratio is determined according to the distance between the pixel and each surrounding related pixel, and the pixel values of the surrounding related pixels are then generated based on this weight ratio and the pixel value of the pixel itself.
  • Operation 1070: Calculate the pixel value of each W pixel according to the pixel values of its sub-pixels, and perform an interpolation operation on the pixel values of the W pixels to generate a W pixel image.
  • the pixel values of the sub-pixels of the W pixels are obtained from the pixel values of the sub-pixels, and the pixel values of the sub-pixels of the W pixels are combined to obtain the pixel values of the W pixels.
  • Specifically, the pixel values of the sub-pixels of the W pixels are acquired from the original RAW image 1102 to generate a RAW image 1110 corresponding to the W pixels. The pixel values of the sub-pixels of the W pixels are combined to obtain the pixel values of the W pixels, generating a merged W image 1112 corresponding to the W pixels. An interpolation operation is performed on the pixel values of the W pixels in the merged W image 1112 to generate a W pixel image 1114.
  • the Bayer array image is fused with the W pixel image to generate a target image.
  • the Bayer array image 1108 is fused with the W pixel image 1114 to generate a target image 1116 .
  • During fusion, the pixel value of each pixel in the Bayer array image 1108 can be directly combined with the pixel value of the corresponding pixel in the W pixel image 1114 to generate the pixel value of the pixel at the corresponding position in the target image 1116.
  • If the light intensity of the current shooting scene exceeds the first preset threshold but is not greater than the second preset threshold, then after focusing according to the focusing method in the above embodiment, the exposure of the RGBW pixel array is controlled to obtain the pixel values of the sub-pixels in the RGBW pixel array. Because the light intensity at this time is somewhat weak, the signal-to-noise ratio of each sub-pixel's value is low; combining the pixel values of the sub-pixels of each color pixel to generate the pixel value of the color pixel improves the signal-to-noise ratio of the color pixel's value.
  • an interpolation operation is performed on the pixel values of the color pixels to generate a Bayer array image.
  • the pixel values of the sub-pixels of each W pixel are combined to generate the pixel value of the W pixel, and the signal-to-noise ratio of the pixel value of the W pixel is improved.
  • the Bayer array image is fused with the W pixel image to generate the target image.
  • the signal corresponding to the collected pixel values is increased, thus improving the signal-to-noise ratio of the target image.
  • the color pixels include R pixels, G pixels, and B pixels; according to the pixel values of the sub-pixels of each color pixel, calculating the pixel value of the color pixel includes:
  • When combining the pixel values of the sub-pixels of an R pixel to obtain the pixel value of the R pixel, the pixel value may be generated by directly calculating a weighted average of the sub-pixel values. The pixel values of the G pixels and the B pixels are calculated similarly. Combining the sub-pixel values of each color pixel to generate the pixel value of the color pixel improves the signal-to-noise ratio of the color pixel's value.
  • calculating the pixel value of the W pixel according to the pixel value of the sub-pixel of the W pixel includes:
  • the pixel values of the sub-pixels of the W pixels are obtained from the pixel values of the sub-pixels, and the pixel values of the sub-pixels of the W pixels are combined to obtain the pixel values of the W pixels.
  • When the pixel values of the sub-pixels of the W pixels are combined to obtain the pixel values of the W pixels, the pixel value of a W pixel may be generated by directly calculating a weighted average of the pixel values of its sub-pixels.
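  • A minimal sketch of this weighted-average combination, applicable to R, G, B and W pixels alike; uniform weights reduce it to a plain mean.

```python
import numpy as np

def combine_subpixels(sub_values, weights=None) -> float:
    """Combine the sub-pixel values of one pixel into a single pixel
    value via a weighted average, as suggested above."""
    sub_values = np.ravel(np.asarray(sub_values, float))
    if weights is None:
        weights = np.ones_like(sub_values)
    return float(np.average(sub_values, weights=weights))

w_pixel = [[120.0, 118.0], [121.0, 119.0]]  # 4 sub-pixel values of a W pixel
print(combine_subpixels(w_pixel))           # -> 119.5
```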
  • the pixel values of the sub-pixels of each W pixel are combined to generate the pixel value of the W pixel, and the signal-to-noise ratio of the pixel value of the W pixel is improved.
  • In one embodiment, determining the target pixel corresponding to the light intensity of the current shooting scene from the RGBW pixel array includes:
  • if the light intensity of the current shooting scene is less than or equal to the first preset threshold, using the W pixels in the RGBW pixel array as the target pixel.
  • In this case the light is weak, so the W pixels are determined as the target pixel so that more phase information can be obtained through them.
  • In one embodiment, each RGBW pixel array includes a plurality of pixel units. As shown in FIG. 12, operation 440, acquiring the phase information collected by the target pixel and calculating the phase difference according to the phase information of the target pixel, includes:
  • Operation 1220: For the W pixels, acquire the phase information of each sub-pixel in the W pixels.
  • As shown in FIG. 13, a schematic diagram of focus control in an embodiment: if the light intensity of the current shooting scene is less than or equal to the first preset threshold, the light intensity is very weak, so the W pixels in the RGBW pixel array are used as the target pixel. The phase information of the sub-pixels of the W pixels in the RGBW pixel array is then acquired to generate the W pixel array 1320.
  • Operation 1240: For each pixel unit, combine the phase information of the sub-pixels located in the same region along the first direction within each W pixel to obtain the combined phase information of the W pixel in the first direction, and calculate the phase difference in the first direction according to the combined phase information of the W pixels in the first direction; or,
  • For a W pixel unit in the W pixel array 1320, first determine the sub-pixels located in the same region along the first direction within each W pixel. For example, the four sub-pixels of the W pixel in the upper-right corner of the first pixel unit (R pixel unit) are numbered sub-pixel 1, sub-pixel 2, sub-pixel 3 and sub-pixel 4 from top to bottom and from left to right, and the four sub-pixels of the W pixel in the lower-left corner of the first pixel unit are numbered sub-pixel 5, sub-pixel 6, sub-pixel 7 and sub-pixel 8 from top to bottom and from left to right.
  • phase information of the sub-pixels of the W pixel in the same area in the first direction in the R pixel unit is combined.
  • the phase difference in the first direction is calculated according to the combined phase information of each pixel in the first direction.
  • the phase difference of the W pixel in the first direction is calculated according to the combined phase information of the two W pixels in the R pixel unit in the first direction.
  • The combined phase information of the W pixels in the combined W pixel array 1340 may be combined again to generate the W pixel array 1360, and the phase difference of the W pixels in the first direction is then calculated from it.
  • Operation 1260: For each pixel unit, combine the phase information of the sub-pixels located in the same region along the second direction within each W pixel to obtain the combined phase information of the W pixel in the second direction, and calculate the phase difference in the second direction according to the combined phase information of the W pixels in the second direction; the first direction and the second direction are perpendicular to each other.
  • For a W pixel unit in the W pixel array 1320, first determine the sub-pixels located in the same region along the second direction within each W pixel.
  • the four sub-pixels of the W pixel in the upper right corner of the first pixel unit (R pixel unit) are numbered as sub-pixel 1, sub-pixel 2, sub-pixel 3, and sub-pixel from top to bottom and from left to right. 4.
  • the four sub-pixels of the W pixel in the lower left corner of the first pixel unit (R pixel unit) are numbered as sub-pixel 5, sub-pixel 6, sub-pixel 7 and sub-pixel 8 from top to bottom and from left to right.
  • It is determined that the sub-pixels of the upper-right W pixel in the same region along the second direction are sub-pixel 1 and sub-pixel 2, and the sub-pixels of the lower-left W pixel in the same region along the second direction are sub-pixel 5 and sub-pixel 6.
  • phase information of the sub-pixels of the W pixel in the same area in the second direction in the R pixel unit is combined.
  • That is, the upper phase information and the lower phase information are combined to obtain the combined phase information of each pixel in the second direction, and a combined W pixel array is generated.
  • the phase difference in the second direction is calculated according to the combined phase information of each pixel in the second direction.
  • the phase difference of the W pixel in the second direction is calculated according to the combined phase information of the two W pixels in the R pixel unit in the second direction.
  • The combined phase information of the W pixels in the combined W pixel array 1340 may be combined again to generate the W pixel array 1360, and the phase difference of the W pixels in the second direction is then calculated from it.
  • In this embodiment, because the light intensity at this time is very weak, the W pixels in the RGBW pixel array are used as the target pixel.
  • For each pixel unit, the phase information of the sub-pixels located in the same region along the first direction/second direction within each W pixel is combined to obtain the combined phase information of the W pixel in the first direction/second direction, and the phase difference in the first direction/second direction is calculated according to the combined phase information of the W pixels in that direction; the first direction and the second direction are perpendicular to each other.
  • In this way, the accuracy and the signal-to-noise ratio of the acquired phase information are improved. Focusing is then performed based on the phase differences in the first direction and the second direction, which ultimately improves the accuracy of phase focusing.
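  • A minimal sketch of the two-stage combination used for W pixels in very weak light, under the same 2 × 2 sub-pixel model as before: stage one merges same-region sub-pixels inside each W pixel, stage two merges the results across W pixels (the second combination mentioned above) before the difference is taken. The arithmetic is an assumed simplification, not the exact on-sensor processing.

```python
import numpy as np

def w_pixel_pd_low_light(w_pixels, axis: int = 0) -> float:
    """Low-light phase difference from several W pixels: combine
    same-region sub-pixels within each pixel, combine again across
    pixels, then take the difference of the two merged signals."""
    stage1 = np.stack([np.asarray(p, float).sum(axis=axis) for p in w_pixels])
    stage2 = stage1.sum(axis=0)          # combine the combined values again
    return float(stage2[1] - stage2[0])

ws = [[[30.0, 33.0], [31.0, 34.0]],
      [[29.0, 32.5], [30.0, 33.5]]]      # hypothetical W-pixel phase signals
print(w_pixel_pd_low_light(ws))
```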
  • a plurality of photosensitive elements corresponding to a pixel are arranged in a center-symmetric manner.
  • FIG. 3 is a schematic structural diagram of part of the image sensor in an embodiment.
  • the image sensor includes a plurality of RGBW pixel arrays arranged in an array.
  • FIG. 3 is a schematic diagram of an RGBW pixel array.
  • Each RGBW pixel array includes a plurality of pixel units Z, as shown in FIG. 3 , each RGBW pixel array includes 4 pixel units Z.
  • The four pixel units Z are respectively a red pixel unit, a green pixel unit, a green pixel unit and a blue pixel unit.
  • Each pixel unit Z includes W pixels D arranged along one diagonal and color pixels D arranged along the other diagonal, and each pixel D corresponds to one microlens.
  • The color pixels D include R pixels, G pixels and B pixels. Specifically, the red pixel unit includes 2 W pixels and 2 R pixels arranged diagonally; the green pixel unit includes 2 W pixels and 2 G pixels arranged diagonally; and the blue pixel unit includes 2 W pixels and 2 B pixels arranged diagonally.
  • each W pixel D includes a plurality of sub-pixels d arranged in an array
  • each color pixel D includes a plurality of sub-pixels d arranged in an array
  • each sub-pixel d corresponds to a photosensitive element. Since the plurality of photosensitive elements corresponding to the pixels are arranged in a center-symmetric manner, the W pixel, the R pixel, the G pixel and the B pixel include a plurality of sub-pixels arranged in a center-symmetric manner. That is, the photosensitive elements corresponding to these sub-pixels may be arranged symmetrically to the center in various arrangements or in various shapes, and are not limited to the arrangement in a square as shown in FIG. 3 .
  • the photosensitive elements corresponding to the sub-pixels may be arranged symmetrically to the center in various arrangements or shapes, and each sub-pixel d corresponds to a photosensitive element. Therefore, the W pixel, the R pixel, the G pixel, and the B pixel include a plurality of sub-pixels arranged in a center-symmetric manner. A variety of arrangements are provided for the sub-pixels, so the sub-pixels can collect diverse phase information, thereby improving the accuracy of subsequent focusing.
  • the plurality of photosensitive elements corresponding to the pixels are arranged symmetrically in a trapezoidal manner.
  • each RGBW pixel array includes 4 pixel units Z.
  • The four pixel units Z are respectively a red pixel unit, a green pixel unit, a green pixel unit and a blue pixel unit.
  • Each pixel unit Z includes W pixels D arranged along one diagonal and color pixels D arranged along the other diagonal, and each pixel D corresponds to one microlens.
  • the color pixel D includes R pixel, G pixel and B pixel.
  • Each W pixel includes a plurality of sub-pixels d arranged in an array, and these sub-pixels are arranged centrally symmetrically in a trapezoidal manner.
  • each R pixel includes a plurality of sub-pixels d arranged in an array, and these sub-pixels are arranged symmetrically to the center in a trapezoidal manner.
  • Each G pixel includes a plurality of sub-pixels d arranged in an array, and these sub-pixels are arranged centrally symmetrically in a trapezoidal manner.
  • Each B pixel includes a plurality of sub-pixels d arranged in an array, and these sub-pixels are arranged symmetrically to the center in a trapezoidal manner.
  • each sub-pixel d corresponds to a photosensitive element.
  • the photosensitive element may be a photodiode (PD, PhotoDiode).
  • PD photodiode
  • As shown in FIG. 14, both the left PD and the right PD have a trapezoidal structure, and the left PD and the right PD are arranged symmetrically about the center.
  • the W pixels, R pixels, G pixels, and B pixels in the RGBW pixel array may also be combined in a variety of different arrangements, which is not specifically limited in this application.
  • the photosensitive elements corresponding to the sub-pixels may be arranged symmetrically to the center in various arrangements or shapes, and each sub-pixel d corresponds to a photosensitive element. Therefore, the W pixel, the R pixel, the G pixel, and the B pixel include a plurality of sub-pixels that are symmetrically arranged in a trapezoidal manner. A variety of arrangements are provided for the sub-pixels, so the sub-pixels can collect diverse phase information, thereby improving the accuracy of subsequent focusing.
  • the plurality of photosensitive elements corresponding to the pixels are arranged symmetrically about the center in an L-shape.
  • each RGBW pixel array includes 4 pixel units Z.
  • The four pixel units Z are respectively a red pixel unit, a green pixel unit, a green pixel unit and a blue pixel unit.
  • Each pixel unit Z includes W pixels D arranged along one diagonal and color pixels D arranged along the other diagonal, and each pixel D corresponds to one microlens.
  • the color pixel D includes R pixel, G pixel and B pixel.
  • Each W pixel, R pixel, G pixel, and B pixel includes a plurality of sub-pixels d arranged in an array, and these sub-pixels are arranged center-symmetrically in an L-shape.
  • each sub-pixel d corresponds to a photosensitive element.
  • the photosensitive element may be a photodiode (PD, PhotoDiode).
  • In FIG. 15, both the left PD and the right PD have an L-shaped structure, and the left PD and the right PD are arranged center-symmetrically.
  • the W pixels, R pixels, G pixels, and B pixels in the RGBW pixel array may also be combined in a variety of different arrangements, which is not specifically limited in this application.
  • the photosensitive elements corresponding to the sub-pixels may be arranged center-symmetrically in various layouts or shapes, and each sub-pixel d corresponds to one photosensitive element. Therefore, the W, R, G, and B pixels include a plurality of sub-pixels arranged center-symmetrically in an L-shape. This provides diverse arrangements for the sub-pixels, so they can collect diverse phase information, improving the accuracy of subsequent focusing.
  • a focus control device 1600 is provided, which is applied to an electronic device, the electronic device includes an image sensor, the image sensor includes an RGBW pixel array, and the device includes:
  • the target pixel determination module 1620 is configured to determine, from the RGBW pixel array according to the light intensity of the current shooting scene, a target pixel corresponding to the light intensity of the current shooting scene; the target pixel includes W pixels or at least one kind of color pixel in the RGBW pixel array;
  • a phase difference calculation module 1640 configured to acquire phase information of the target pixel, and calculate the phase difference according to the phase information of the target pixel;
  • a focus control module 1660 configured to perform focus control based on the phase difference.
  • the target pixel determining module 1620 is further configured to determine the target pixel corresponding to the light intensity of the current shooting scene from the RGBW pixel array according to the light intensity of the current shooting scene and the preset threshold of light intensity.
  • the target pixel determination module 1620 includes:
  • the first target pixel determining unit is configured to use at least one color pixel in the RGBW pixel array as the target pixel when the light intensity of the current shooting scene exceeds a first preset threshold.
  • the phase difference calculation module 1640 is configured to acquire the phase information of the sub-pixels of each pixel in the target pixel if the light intensity of the current shooting scene exceeds a second preset threshold, where the second preset threshold is greater than the first preset threshold; and, for two pixels of the same color in the target pixel, calculate the phase difference of the target pixel according to the phase information of each pair of sub-pixels in the two same-color pixels; the two same-color pixels are adjacent along a diagonal of the pixel array, and the sub-pixels of each pair are located one in each of the two same-color pixels, at the same position within each pixel.
  • a focus control device is provided, and the device further includes:
  • the first target image generation module is used to control exposure of the RGBW pixel array and acquire the pixel values of the sub-pixels in the RGBW pixel array; acquire the pixel values of the sub-pixels of the color pixels from the sub-pixel values and interpolate them to generate a Bayer array image; acquire the pixel values of the sub-pixels of the W pixels from the sub-pixel values and interpolate them to generate a W pixel image; and fuse the Bayer array image with the W pixel image to generate the target image.
  • each RGBW pixel array includes a plurality of pixel units, each pixel unit includes a plurality of pixels, and each pixel includes a plurality of sub-pixels;
  • the phase difference calculation module 1640 includes:
  • the first phase difference calculation unit is configured to: if the light intensity of the current shooting scene exceeds the first preset threshold and is not greater than the second preset threshold, acquire the phase information of the sub-pixels of each pixel in the target pixel;
  • for each pixel unit, combine the phase information of the sub-pixels that are in the same region, in the first direction, within the same-color pixels to obtain the combined phase information of the same-color pixels of each pixel unit in the first direction, and calculate the phase difference in the first direction according to the combined phase information of each pixel in the first direction; or,
  • for each pixel unit, combine the phase information of the sub-pixels that are in the same region, in the second direction, within the same-color pixels to obtain the combined phase information of the same-color pixels of each pixel unit in the second direction, and calculate the phase difference in the second direction according to the combined phase information of each pixel in the second direction; the first direction and the second direction are perpendicular to each other.
  • the focus control module 1660 is further configured to perform focus control based on the phase difference in the first direction if the preview image corresponding to the current shooting scene includes texture features in the second direction; or, if the current shooting scene corresponds to If the preview image includes texture features in the first direction, focus control is performed based on the phase difference in the second direction.
  • a focus control device is provided, and the device further includes:
  • the second target image generation module is used to control the exposure of the RGBW pixel array, and obtain the pixel value of the sub-pixel in the RGBW pixel array;
  • the Bayer array image is fused with the W pixel image to generate the target image.
  • the color pixels include R pixels, G pixels, and B pixels; the second target image generation module is also used to obtain the pixel values of the sub-pixels of the R pixels, G pixels, and B pixels from the pixel values of the sub-pixels , combine the pixel values of the sub-pixels of the R pixel to obtain the pixel value of the R pixel, combine the pixel values of the sub-pixels of the G pixel to obtain the pixel value of the G pixel, and combine the pixel values of the sub-pixels of the B pixel to obtain the pixel value of the B pixel value.
  • the second target image generating module is further configured to obtain the pixel value of the sub-pixel of W pixel from the pixel value of the sub-pixel, and combine the pixel values of the sub-pixel of W pixel to obtain the pixel value of W pixel .
  • the target pixel determination module 1620 includes:
  • the second target pixel determining unit is configured to use the W pixel in the RGBW pixel array as the target pixel if the light intensity of the current shooting scene is less than or equal to the first preset threshold.
  • the phase difference calculation module 1640 includes:
  • the second phase difference calculation unit is configured to: for the W pixels, acquire the phase information of each sub-pixel in the W pixels;
  • for each pixel unit, combine the phase information of the sub-pixels of the W pixels that are in the same region in the first direction to obtain the combined phase information of the W pixels in the first direction, and calculate the phase difference in the first direction according to the combined phase information of the W pixels in the first direction; or,
  • for each pixel unit, combine the phase information of the sub-pixels of the W pixels that are in the same region in the second direction to obtain the combined phase information of the W pixels in the second direction, and calculate the phase difference in the second direction according to the combined phase information of the W pixels in the second direction; the first direction and the second direction are perpendicular to each other.
  • the image sensor includes a plurality of RGBW pixel arrays arranged in an array, each RGBW pixel array includes a plurality of pixel units, and each pixel unit includes W pixels arranged in a diagonal line and another pair of Color pixels arranged in diagonal lines, and each pixel corresponds to a microlens and a plurality of photosensitive elements; each pixel includes a plurality of sub-pixels arranged in an array, and each sub-pixel corresponds to a photosensitive element; the color pixels include R pixels, G pixels, B pixels.
  • an imaging device including a lens, an optical filter, and an image sensor, wherein the lens, the optical filter, and the image sensor are sequentially located on the incident light path;
  • the image sensor includes a plurality of RGBW pixel arrays arranged in an array, each RGBW pixel array includes a plurality of pixel units, and each pixel unit includes W pixels arranged in a diagonal line and color pixels arranged in another diagonal line , and each pixel corresponds to a microlens and a plurality of photosensitive elements; each pixel includes a plurality of sub-pixels arranged in an array, and each sub-pixel corresponds to a photosensitive element; color pixels include R pixels, G pixels, and B pixels.
  • a plurality of photosensitive elements corresponding to a pixel are arranged in a center-symmetric manner.
  • each module in the above-mentioned focus control device is only for illustration. In other embodiments, the focus control device can be divided into different modules according to needs, so as to complete all or part of the functions of the above-mentioned focus control device.
  • Each module in the above-mentioned focusing control device can be fully or partially realized by software, hardware and a combination thereof.
  • the above-mentioned modules can be embedded in or independent of the processor in the computer device in the form of hardware, and can also be stored in the memory of the computer device in the form of software, so that the processor can invoke and execute the corresponding operations of the above-mentioned modules.
  • Fig. 17 is a schematic diagram of the internal structure of an electronic device in one embodiment.
  • the electronic device can be any terminal device such as mobile phone, tablet computer, notebook computer, desktop computer, PDA (Personal Digital Assistant, personal digital assistant), POS (Point of Sales, sales terminal), vehicle-mounted computer, wearable device, etc.
  • the electronic device includes a processor and memory connected by a system bus.
  • the processor may include one or more processing units.
  • the processor can be a CPU (Central Processing Unit, central processing unit) or a DSP (Digital Signal Processing, digital signal processor), etc.
  • the memory may include non-volatile storage media and internal memory. Nonvolatile storage media store operating systems and computer programs.
  • the computer program can be executed by a processor to implement the focus control method provided in each of the following embodiments.
  • the internal memory provides a cached, high-speed runtime environment for the operating system and the computer program in the non-volatile storage medium.
  • each module in the focus control device provided in the embodiment of the present application may be in the form of a computer program.
  • the computer program can run on a terminal or a server.
  • the program modules constituted by the computer program can be stored in the memory of the electronic device.
  • the operations of the methods described in the embodiments of the present application are realized.
  • the embodiment of the present application also provides a computer-readable storage medium.
  • One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform operations of the focus control method.
  • the embodiment of the present application also provides a computer program product including instructions, which, when running on a computer, causes the computer to execute the focusing control method.
  • Non-volatile memory can include ROM (Read-Only Memory, read-only memory), PROM (Programmable Read-only Memory, programmable read-only memory), EPROM (Erasable Programmable Read-Only Memory, erasable programmable read-only memory) Memory), EEPROM (Electrically Erasable Programmable Read-only Memory, Electrically Erasable Programmable Read-only Memory) or flash memory.
  • Volatile memory can include RAM (Random Access Memory, Random Access Memory), which is used as external cache memory.
  • RAM is available in various forms, such as SRAM (Static Random Access Memory), DRAM (Dynamic Random Access Memory), SDRAM (Synchronous Dynamic Random Access Memory), DDR SDRAM (Double Data Rate Synchronous Dynamic Random Access Memory), ESDRAM (Enhanced Synchronous Dynamic Random Access Memory), SLDRAM (Sync Link Dynamic Random Access Memory), RDRAM (Rambus Dynamic Random Access Memory), and DRDRAM (Direct Rambus Dynamic Random Access Memory).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Automatic Focus Adjustment (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

The present application relates to a focus control method applied to an electronic device. The electronic device includes an image sensor, and the image sensor includes an RGBW pixel array. The method includes: determining, from the RGBW pixel array according to the light intensity of the current shooting scene, a target pixel corresponding to the light intensity of the current shooting scene, the target pixel including W pixels or at least one kind of color pixel in the RGBW pixel array (420); acquiring phase information of the target pixel and calculating a phase difference according to the phase information of the target pixel (440); and performing focus control based on the phase difference (460).

Description

Focus control method and apparatus, imaging device, electronic device, and computer-readable storage medium
This application claims priority to Chinese Patent Application No. 202110909146.8, entitled "Focus control method and apparatus, imaging device, electronic device, and computer-readable storage medium" and filed with the Chinese Patent Office on August 9, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of image processing, and in particular to a focus control method and apparatus, an imaging device, an electronic device, and a computer-readable storage medium.
Background
With the development of electronic devices, more and more users capture images with them. To ensure that the captured images are sharp, the camera module of the electronic device usually needs to be focused; that is, the distance between the lens and the image sensor is adjusted so that the subject lies on the focal plane. Traditional focusing methods include phase detection auto focus (PDAF).
Traditional phase detection auto focus mainly calculates a phase difference based on an RGB pixel array and then controls a motor based on the phase difference; the motor drives the lens to a suitable position for focusing so that the subject is imaged on the focal plane.
However, since the sensitivity of an RGB pixel array differs under different light intensities, the phase difference calculated from the RGB pixel array is less accurate under some light intensities, which in turn greatly reduces the focusing accuracy.
Summary
Embodiments of the present application provide a focus control method and apparatus, an imaging device, an electronic device, and a computer-readable storage medium, which can improve the accuracy of focus control.
A focus control method, applied to an electronic device, where the electronic device includes an image sensor and the image sensor includes an RGBW pixel array, the method including:
determining, from the RGBW pixel array according to the light intensity of the current shooting scene, a target pixel corresponding to the light intensity of the current shooting scene, where the target pixel includes W pixels or at least one kind of color pixel in the RGBW pixel array;
acquiring phase information of the target pixel, and calculating a phase difference according to the phase information of the target pixel;
performing focus control based on the phase difference.
An imaging device, including a lens, an optical filter, and an image sensor, where the lens, the optical filter, and the image sensor are located in sequence on the incident light path;
the image sensor includes a plurality of RGBW pixel arrays arranged in an array, each RGBW pixel array includes a plurality of pixel units, each pixel unit includes W pixels arranged along one diagonal and color pixels arranged along the other diagonal, and each pixel corresponds to one microlens and a plurality of photosensitive elements; each pixel includes a plurality of sub-pixels arranged in an array, and each sub-pixel corresponds to one photosensitive element; the color pixels include R pixels, G pixels, and B pixels.
A focus control apparatus, applied to an electronic device, where the electronic device includes an image sensor and the image sensor includes an RGBW pixel array, the apparatus including:
a target pixel determination module configured to determine, from the RGBW pixel array according to the light intensity of the current shooting scene, a target pixel corresponding to the light intensity of the current shooting scene, where the target pixel includes W pixels or at least one kind of color pixel in the RGBW pixel array;
a phase difference calculation module configured to acquire phase information of the target pixel and calculate a phase difference according to the phase information of the target pixel;
a focus control module configured to perform focus control based on the phase difference.
An electronic device, including a memory and a processor, where the memory stores a computer program that, when executed by the processor, causes the processor to perform the operations of the focus control method described above.
A computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the operations of the focus control method described above.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of the principle of phase detection auto focus;
FIG. 2 is a schematic diagram of phase detection pixels arranged in pairs among the pixels of an image sensor;
FIG. 3 is a schematic diagram of part of the structure of an RGBW pixel array in one embodiment;
FIG. 4 is a flowchart of a focus control method in one embodiment;
FIG. 5 is a schematic diagram of a focus control method in one embodiment;
FIG. 6 is a flowchart of a method for generating a target image after focus control is performed based on the phase difference, in one embodiment;
FIG. 7 is a schematic diagram of a method for generating a target image after focus control is performed based on the phase difference, in one embodiment;
FIG. 8 is a flowchart of the method in FIG. 4 for acquiring the phase information collected by the target pixel and calculating the phase difference according to the phase information of the target pixel;
FIG. 9 is a schematic diagram of a focus control method in another embodiment;
FIG. 10 is a flowchart of a method for generating a target image after focus control is performed based on the phase difference, in another embodiment;
FIG. 11 is a schematic diagram of a method for generating a target image after focus control is performed based on the phase difference, in another embodiment;
FIG. 12 is a flowchart of the method in FIG. 4 for acquiring the phase information collected by the target pixel and calculating the phase difference according to the phase information of the target pixel;
FIG. 13 is a schematic diagram of a focus control method in yet another embodiment;
FIG. 14 is a schematic diagram of an RGBW pixel array in yet another embodiment;
FIG. 15 is a schematic diagram of an RGBW pixel array in still another embodiment;
FIG. 16 is a structural block diagram of a focus control apparatus in one embodiment;
FIG. 17 is a schematic diagram of the internal structure of an electronic device in one embodiment.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application and are not intended to limit it.
It can be understood that the terms "first", "second", and the like used in the present application may be used herein to describe various elements, but these elements are not limited by these terms; the terms are only used to distinguish one element from another. For example, without departing from the scope of the present application, a first client may be referred to as a second client, and similarly, a second client may be referred to as a first client. Both are clients, but they are not the same client.
FIG. 1 is a schematic diagram of the principle of phase detection auto focus (PDAF). As shown in FIG. 1, M1 is the position of the image sensor when the imaging device is in the in-focus state, where the in-focus state refers to the state of successful focusing. When the image sensor is at position M1, the imaging rays g reflected by the object W toward the lens in different directions converge on the image sensor; that is, they are imaged at the same position on the image sensor, and the image sensor produces a sharp image.
M2 and M3 are positions the image sensor may occupy when the imaging device is not in focus. As shown in FIG. 1, when the image sensor is at position M2 or M3, the imaging rays g reflected by the object W toward the lens in different directions are imaged at different positions: at M2 they are imaged at positions A and B respectively, and at M3 they are imaged at positions C and D respectively. In these cases, the image is not sharp.
In PDAF technology, the difference in the positions of the images formed on the image sensor by imaging rays entering the lens from different directions can be acquired; for example, as shown in FIG. 1, the difference between positions A and B, or between positions C and D. After this positional difference is acquired, the defocus distance can be obtained from the difference together with the geometric relationship between the lens and the image sensor in the camera. The defocus distance is the distance between the current position of the image sensor and the position it should occupy when in focus. The imaging device can then focus according to the obtained defocus distance.
It follows that when in focus, the calculated PD value is 0; conversely, the larger the calculated value, the farther from the in-focus position, and the smaller the value, the closer to it. With PDAF, the PD value is calculated, the calibrated correspondence between PD values and defocus distances is used to find the defocus distance, and the lens is then moved to the in-focus position according to that distance, thereby achieving focus.
In the related art, some phase detection pixels can be arranged in pairs among the pixels of the image sensor. As shown in FIG. 2, the image sensor may be provided with phase detection pixel pairs (hereinafter, pixel pairs) A, B, and C. In each pixel pair, one phase detection pixel is shielded on the left (Left Shield) and the other on the right (Right Shield).
For a left-shielded phase detection pixel, only the right part of the imaging beam directed at it can form an image on its photosensitive (unshielded) part; for a right-shielded phase detection pixel, only the left part of the beam can form an image on its photosensitive part. In this way, the imaging beam is divided into left and right parts, and the phase difference is obtained by comparing the images formed by the two parts.
The electronic device includes an image sensor, and the image sensor includes a plurality of RGBW pixel arrays arranged in an array. FIG. 3 is a schematic diagram of one RGBW pixel array. Compared with a general Bayer pattern, the RGBW pattern increases the amount of transmitted light and improves the signal-to-noise ratio of the collected signal. Each RGBW pixel array includes a plurality of pixel units Z; as shown in FIG. 3, each RGBW pixel array includes 4 pixel units Z, which are a red pixel unit, a green pixel unit, a green pixel unit, and a blue pixel unit, respectively. Of course, in other embodiments, each RGBW pixel array may include 6 or 8 pixel units Z, which is not limited in the present application.
Each pixel unit Z includes W pixels (white pixels) D arranged along one diagonal and color pixels D arranged along the other diagonal, and each pixel D corresponds to one microlens and a plurality of photosensitive elements; each pixel includes a plurality of sub-pixels arranged in an array, and each sub-pixel corresponds to one photosensitive element. The color pixels D include R pixels (red), G pixels (green), and B pixels (blue). Specifically, a red pixel unit includes 2 W pixels arranged along one diagonal and 2 R pixels arranged along the other diagonal; a green pixel unit includes 2 W pixels along one diagonal and 2 G pixels along the other; a blue pixel unit includes 2 W pixels along one diagonal and 2 B pixels along the other.
Each W pixel D includes a plurality of sub-pixels d arranged in an array, each color pixel D likewise includes a plurality of sub-pixels d arranged in an array, and each sub-pixel d corresponds to one photosensitive element. A photosensitive element is an element that converts an optical signal into an electrical signal; for example, it may be a photodiode. As shown in FIG. 3, each W pixel D includes 4 sub-pixels d arranged in an array (i.e., 4 photodiodes), and each color pixel D includes 4 sub-pixels d arranged in an array (i.e., 4 photodiodes). For example, a green pixel D includes 4 photodiodes arranged in an array (Up-Left PhotoDiode, Up-Right PhotoDiode, Down-Left PhotoDiode, and Down-Right PhotoDiode).
FIG. 4 is a flowchart of a focus control method in one embodiment. The focus control method in the embodiments of the present application is described using an electronic device with a shooting function as an example. The electronic device may be any terminal device such as a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a wearable device (smart band, smart watch, smart glasses, smart gloves, smart socks, smart belt, etc.), a VR (virtual reality) device, a smart home device, or a self-driving car. The electronic device includes an image sensor, and the image sensor includes an RGBW pixel array. As shown in FIG. 4, the focus control method includes operations 420 to 460.
Operation 420: determine, from the RGBW pixel array according to the light intensity of the current shooting scene, a target pixel corresponding to the light intensity of the current shooting scene; the target pixel includes W pixels or at least one kind of color pixel in the RGBW pixel array.
In different shooting scenes, or at different times, the light intensity of the current shooting scene varies. Since the sensitivity of an RGB pixel array differs under different light intensities, the phase difference calculated from the RGB pixel array is less accurate under some light intensities, greatly reducing focusing accuracy. Light intensity is also called illuminance, a physical term referring to the luminous flux of visible light received per unit area, measured in lux (lx). Illuminance indicates the strength of the light and the degree to which a surface is illuminated. The table below lists illuminance values under different weather conditions and positions:
Table 1-1
Weather and position: Illuminance
Direct sunlight on the ground (sunny day): 100000 lx
Indoors, center of a room (sunny day): 200 lx
Outdoors (cloudy day): 50-500 lx
Indoors (cloudy day): 5-50 lx
Moonlight (full moon): 2500 lx
Clear moonlit night: 0.2 lx
Dark night: 0.0011 lx
As can be seen from Table 1-1 above, the light intensity of the current shooting scene differs greatly across shooting scenes and times.
To solve this problem, the RGB pixel array of the image sensor in the traditional method is replaced with an RGBW pixel array. Compared with an RGB pixel array, the RGBW pixel array adds a white region to the RGB three-color filter, which increases light transmittance. Because W pixels have higher sensitivity, the RGBW pixel array can calculate the phase difference more accurately than the RGB pixel array in scenes with weak light, thereby improving focusing accuracy.
Specifically, according to the light intensity of the current shooting scene, the target pixel corresponding to that light intensity is determined from the W pixels or at least one kind of color pixel of the RGBW pixel array. First, the light intensity (illuminance) of the current shooting scene is acquired, for example through a sensor on the electronic device. Then, based on the relationship between the current light intensity and a preset light intensity threshold, the target pixel corresponding to the current light intensity is determined from the RGBW pixel array. For example, if the current light intensity is below the preset threshold, the light is weak, so the W pixels are determined to be the target pixel in order to obtain more phase information through them. If the current light intensity is greater than or equal to the preset threshold, at least one of the RGB pixels is determined to be the target pixel, because accurate phase information can then be obtained through the RGB pixels, while the highly sensitive W pixels saturate easily, which would reduce the accuracy of the obtained phase information.
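This selection rule can be summarized in a short sketch. It is a minimal illustration only, not part of the patent text: the function name, the return labels, and the 50 lx default (borrowed from the cloudy-day boundary in Table 1-1) are all hypothetical.

```python
def select_target_pixel(light_intensity_lx: float,
                        first_threshold_lx: float = 50.0) -> str:
    """Pick which pixels of the RGBW array supply phase information.

    A sketch of the threshold rule described above; 50 lx is only an
    illustrative default for the first preset threshold.
    """
    if light_intensity_lx <= first_threshold_lx:
        # Weak light: W pixels are more sensitive, so use them.
        return "W"
    # Sufficient light: color pixels give accurate phase information,
    # while the more sensitive W pixels risk saturating.
    return "RGB"
```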
Operation 440: acquire phase information of the target pixel, and calculate the phase difference according to the phase information of the target pixel.
After the target pixel is determined, the phase information collected by its sub-pixels can be read. Then the signal differences of the phase signals of the sub-pixels of the target pixel can be calculated in four directions, namely the first direction, the second direction, and the diagonals (the first diagonal direction and the second diagonal direction perpendicular to it), to obtain the phase differences in these four directions. The first direction is the vertical direction of the RGBW pixel array, the second direction is the horizontal direction, and the first and second directions are perpendicular to each other. Of course, the phase differences of the sub-pixels of the target pixel in other directions may also be calculated, which is not limited in the present application.
Operation 460: perform focus control based on the phase difference.
When performing focus control based on the calculated phase difference, note that for a texture feature in a given direction in the preview image, the collected phase difference parallel to that direction is almost 0, so focusing obviously cannot be based on it. Therefore, if the preview image corresponding to the current shooting scene includes texture features in the second direction, focus control is performed based on the phase difference in the first direction. For example, suppose the first direction is the vertical direction of the RGBW pixel array and the second direction is the horizontal direction. Texture features in the second direction then means the preview image contains horizontal stripes, possibly solid-color horizontal stripes. In this case, since the preview image includes horizontal texture features, focus control is performed based on the phase difference in the vertical direction.
If the preview image corresponding to the current shooting scene includes texture features in the first direction, focus control is performed based on the phase difference in the second direction. If it includes texture features in the first diagonal direction, focus control is performed based on the phase difference in the second diagonal direction, and vice versa. Only in this way can the phase difference be collected accurately for texture features in different directions.
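In sketch form (the direction labels are assumptions mapping the first direction to "vertical" and the second to "horizontal", as in the example above):

```python
def choose_pd_direction(texture_direction: str) -> str:
    """Choose which phase difference to use for focusing.

    A phase difference measured parallel to a texture is close to 0,
    so the perpendicular direction is used instead.
    """
    if texture_direction == "horizontal":   # second-direction texture
        return "vertical"                   # use first-direction PD
    if texture_direction == "vertical":     # first-direction texture
        return "horizontal"                 # use second-direction PD
    return "either"                         # no dominant texture direction
```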
In the focus control method of the embodiments of the present application: since the sensitivity of an RGB pixel array differs under different light intensities, the phase difference calculated from the RGB pixel array is less accurate under some light intensities, greatly reducing focusing accuracy. In the present application, the target pixel corresponding to the light intensity of the current shooting scene is determined from the W pixels or at least one kind of color pixel of the RGBW pixel array. Therefore, under different light intensities, if the phase difference calculated from the phase information of at least one kind of color pixel of the RGBW pixel array is less accurate, the phase difference is calculated from the phase information of the W pixels instead, ultimately improving the accuracy of phase focusing. Likewise, if the phase difference calculated from the phase information of the W pixels is less accurate, the phase difference is calculated from the phase information of at least one kind of color pixel.
In one embodiment, operation 420, determining from the RGBW pixel array the target pixel corresponding to the light intensity of the current shooting scene, includes:
determining, from the RGBW pixel array according to the light intensity of the current shooting scene and a preset light intensity threshold, the target pixel corresponding to the light intensity of the current shooting scene.
The preset light intensity threshold is an illuminance threshold. Based on Table 1-1 above, the illuminance of 50 lx at the boundary between indoors and outdoors on a cloudy day may be set as the first preset light intensity threshold (hereinafter, the first preset threshold). Of course, the specific value of the first preset threshold is not limited in the present application.
If the light intensity of the current shooting scene is less than or equal to the first preset threshold, the light is weak, so the W pixels are determined to be the target pixel in order to obtain more phase information through them. If the light intensity exceeds the first preset threshold, at least one of the RGB pixels is determined to be the target pixel, because accurate phase information can then be obtained through the RGB pixels, while the highly sensitive W pixels saturate easily, reducing the accuracy of the obtained phase information.
In this embodiment, when the light is weak, the W pixels, having higher sensitivity, are used as the target pixel, and the phase difference can be calculated accurately through them for focus control. Conversely, when the light is strong, at least one of the RGB pixels is used as the target pixel, and the phase difference can be calculated accurately through them for focus control. Ultimately, accurate focus control is achieved under different light intensities.
In one embodiment, determining, from the RGBW pixel array according to the light intensity of the current shooting scene and the preset light intensity threshold, the target pixel corresponding to the light intensity of the current shooting scene includes:
if the light intensity of the current shooting scene exceeds the first preset threshold, using at least one kind of color pixel in the RGBW pixel array as the target pixel.
In this embodiment, if the light intensity of the current shooting scene exceeds the first preset threshold, at least one of the RGB pixels is determined to be the target pixel, because accurate phase information can then be obtained through the RGB pixels, while the highly sensitive W pixels saturate easily, reducing the accuracy of the obtained phase information.
In one embodiment, operation 440, acquiring the phase information collected by the target pixel and calculating the phase difference according to the phase information of the target pixel, includes:
if the light intensity of the current shooting scene exceeds a second preset threshold, acquiring the phase information of the sub-pixels of each pixel in the target pixel, where the second preset threshold is greater than the first preset threshold;
for two pixels of the same color in the target pixel, calculating the phase difference of the target pixel according to the phase information of each pair of sub-pixels in the two same-color pixels, where the two same-color pixels are adjacent along a diagonal of the pixel array, and the sub-pixels of each pair are located one in each of the two same-color pixels, at the same position within each pixel.
Specifically, since at least one kind of color pixel in the RGBW pixel array is used as the target pixel when the light intensity exceeds the first preset threshold, and the second preset threshold is greater than the first, the target pixel is likewise at least one kind of color pixel in the RGBW pixel array when the light intensity exceeds the second preset threshold.
Therefore, if the light intensity of the current shooting scene exceeds the second preset threshold, the phase information of the sub-pixels of each pixel in the target pixel is acquired, i.e., the phase information of the sub-pixels of at least one kind of color pixel in the RGBW pixel array. Two same-color pixels adjacent along a diagonal of the pixel array are then determined from the target pixel, and for these two same-color pixels, the phase difference of the target pixel is calculated according to the phase information of each pair of sub-pixels, where the sub-pixels of each pair are located one in each of the two same-color pixels, at the same position within each pixel.
Suppose the light intensity of the current shooting scene exceeds the second preset threshold and at least one kind of color pixel in the RGBW pixel array is used as the target pixel. Here, any one of the R, G, and B pixels may be used as the target pixel (e.g., the R pixels, the G pixels, or the B pixels); any two of them may be used (e.g., the R and G pixels, the R and B pixels, or the G and B pixels); or all of the R, G, and B pixels may be used. This is not limited in the present application.
The case where all of the R, G, and B pixels are used as the target pixel is described below as an example. FIG. 5 is a schematic diagram of focus control in one embodiment. After the phase information of each sub-pixel of the R, G, and B pixels is read, two same-color pixels adjacent along a diagonal of the pixel array are determined from the R pixels. Pairs of sub-pixels are then determined from the two same-color pixels, where the sub-pixels of each pair are located one in each of the two same-color pixels, at the same position within each pixel. The phase information of each pair of sub-pixels is input to the ISP, which calculates the phase difference of the R pixels. Here the RGBW pixel array is divided into a first pixel unit (R pixel unit), a second pixel unit (G pixel unit), a third pixel unit (G pixel unit), and a fourth pixel unit (B pixel unit). For example, the 4 sub-pixels of the upper-left R pixel in the first pixel unit are numbered sub-pixel 1, sub-pixel 2, sub-pixel 3, and sub-pixel 4 from top to bottom and from left to right, and the 4 sub-pixels of the lower-right R pixel are numbered sub-pixel 5, sub-pixel 6, sub-pixel 7, and sub-pixel 8 in the same order. Then, from the phase information of the two sub-pixels at the same position in the two R pixels, the phase differences of the R pixels are calculated: the first phase difference from sub-pixels 1 and 5; the second from sub-pixels 2 and 6; the third from sub-pixels 3 and 7; and the fourth from sub-pixels 4 and 8. Finally, the phase difference of the R pixels is obtained from the first, second, third, and fourth phase differences, for example by computing a weighted average, which is not limited in the present application.
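As one way to realise the per-pair calculation just described, the sketch below estimates each pair's phase difference with a SAD (sum of absolute differences) search over line profiles and then averages the four results. The SAD estimator is an assumption; the patent leaves the actual calculation to the ISP.

```python
import numpy as np

def pair_phase_difference(sig_a: np.ndarray, sig_b: np.ndarray,
                          max_shift: int = 8) -> int:
    """Estimate the shift (in samples) between two phase signals by
    minimising the sum of absolute differences.

    sig_a / sig_b are 1-D line profiles collected by one pair of
    sub-pixels across a focus window.
    """
    best_shift, best_cost = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        a = sig_a[max(0, s):len(sig_a) + min(0, s)]
        b = sig_b[max(0, -s):len(sig_b) + min(0, -s)]
        cost = np.abs(a.astype(np.int64) - b.astype(np.int64)).mean()
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

def target_pixel_pd(pairs, weights=None) -> float:
    """Weighted average of the per-pair phase differences, mirroring
    the combination of the first to fourth phase differences above."""
    pds = [pair_phase_difference(a, b) for a, b in pairs]
    return float(np.average(pds, weights=weights))
```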
Similarly, the above operations are also performed on the diagonally adjacent G pixels in each pixel unit to obtain the phase difference of the G pixels, and on the diagonally adjacent B pixels in each pixel unit to obtain the phase difference of the B pixels.
Then, based on the phase differences of the R, G, and B pixels, the distance from the lens to the in-focus position is calculated; the motor drive code value is calculated from this distance; the motor's driver IC converts the code value into a drive current; and the drive current drives the lens to the in-focus position. This completes the focus control process.
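The chain from phase difference to motor code can be sketched as follows. This is a minimal illustration under stated assumptions: the patent names the steps (PD, defocus distance, code value, drive current) but not their formulas, so the linear conversion factors below stand in for device-specific calibration constants.

```python
def pd_to_motor_code(phase_difference: float,
                     pd_to_defocus_slope: float,
                     defocus_to_code: float,
                     current_code: int) -> int:
    """Convert a phase difference into a motor drive code value.

    pd_to_defocus_slope: calibrated PD -> defocus distance slope (um/PD).
    defocus_to_code:     calibrated defocus distance -> code factor.
    The driver IC then turns the returned code into a drive current.
    """
    defocus_um = phase_difference * pd_to_defocus_slope
    return current_code + round(defocus_um * defocus_to_code)
```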
In this embodiment, if the light intensity of the current shooting scene exceeds the second preset threshold, the phase information of the sub-pixels of each pixel in the target pixel is acquired; the second preset threshold is greater than the first preset threshold. Because the light intensity is high, the phase information collected by the sub-pixels of the RGB pixels is accurate, so for same-color pixels adjacent along a diagonal of the pixel array in the target pixel, the phase difference of the target pixel is calculated directly from the phase information of each pair of sub-pixels in the same-color pixels. Focusing is then performed based on the phase difference of the target pixel, ultimately improving the accuracy of phase focusing.
Following the previous embodiment, as shown in FIG. 6, after focus control is performed based on the phase difference, the method further includes:
Operation 620: control exposure of the RGBW pixel array, and acquire the pixel values of all the sub-pixels in the RGBW pixel array.
After focusing is achieved through the focus control method in the previous embodiment, the RGBW pixel array is exposed and the pixel values of all its sub-pixels are acquired, i.e., the pixel values of the sub-pixels of every R pixel, G pixel, B pixel, and W pixel in the array. FIG. 7 is a schematic diagram of generating a target image in one embodiment; the acquired sub-pixel values constitute the original RAW image 702.
Operation 640: acquire the pixel values of the sub-pixels of the color pixels from the sub-pixel values, and perform interpolation on them to generate a Bayer array image.
The pixel values of the sub-pixels of the R, G, and B pixels are acquired from the original RAW image 702 to generate the RAW image 704 corresponding to the RGB pixels, and interpolation is performed on these values to generate the Bayer array image 706. A Bayer array here is a 4x4 array composed of 8 green, 4 blue, and 4 red pixels; when the grayscale image is converted into a color image, nine operations are performed on 2x2 matrices to finally generate a color image. Specifically, a remosaic interpolation algorithm may be used. The remosaic algorithm mainly works by swapping pixels, or by exploiting the relationship between a pixel and its surrounding related pixels: weight ratios are calculated from the distances between the pixel and the surrounding related pixels, and the pixel values of the surrounding related pixels are then generated from the weight ratios and the pixel's own value.
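The distance-to-weight step of this interpolation can be illustrated with a toy sketch. Real remosaic algorithms are edge-aware and pattern-specific; the function below is only an illustration of inverse-distance weighting, not the patent's algorithm.

```python
import numpy as np

def distance_weighted_value(neighbors) -> float:
    """Interpolate one pixel from (value, distance) neighbour tuples.

    Each neighbour contributes with a weight inversely proportional
    to its distance, illustrating the weight-ratio idea above.
    """
    weights = np.array([1.0 / max(d, 1e-6) for _, d in neighbors])
    values = np.array([v for v, _ in neighbors])
    return float((weights * values).sum() / weights.sum())
```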
Operation 660: acquire the pixel values of the sub-pixels of the W pixels from the sub-pixel values, and perform interpolation on them to generate a W pixel image.
The pixel values of the sub-pixels of the W pixels are acquired from the original RAW image 702 to generate the RAW image 708 corresponding to the W pixels, and interpolation is performed on these values to generate the W pixel image 710.
Operation 680: fuse the Bayer array image with the W pixel image to generate the target image.
Finally, the Bayer array image 706 is fused with the W pixel image 710 to generate the target image 712. Here, the fusion may directly combine the pixel value of each sub-pixel in the Bayer array image 706 with the pixel value of the corresponding sub-pixel in the W pixel image 710 to generate the pixel value of the sub-pixel at the corresponding position in the target image 712.
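"Combine" is not pinned down further in the text; one plausible reading is a weighted addition of corresponding samples, sketched below. The w_gain parameter and the 10-bit clip range are illustrative assumptions, not values from the patent.

```python
import numpy as np

def fuse_bayer_with_w(bayer: np.ndarray, w_image: np.ndarray,
                      w_gain: float = 0.5) -> np.ndarray:
    """Fuse a Bayer array image with a W (panchromatic) image by
    weighted addition of corresponding samples."""
    assert bayer.shape == w_image.shape
    fused = bayer.astype(np.float32) + w_gain * w_image.astype(np.float32)
    return np.clip(fused, 0, 1023).astype(np.uint16)  # assumes 10-bit RAW
```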
In this embodiment, if the light intensity of the current shooting scene exceeds the second preset threshold, the phase information of the sub-pixels of each pixel in the target pixel is acquired. Because the light intensity is high, the phase information collected by the sub-pixels of the RGB pixels is accurate, so for adjacent same-color pixels in the target pixel, the phase difference of the target pixel is calculated directly from the phase information of the two sub-pixels at the same position in the same-color pixels. Focusing is then performed based on the phase difference of the target pixel, ultimately improving the accuracy of phase focusing.
Then, the RGBW pixel array is exposed and the pixel values of all its sub-pixels are acquired. Because the light intensity is high, the signal-to-noise ratio of each sub-pixel's value is high. Therefore, the sub-pixel values of the color pixels are acquired and interpolated directly to generate the Bayer array image, and the sub-pixel values of the W pixels are acquired and interpolated directly to generate the W pixel image; the Bayer array image and the W pixel image are then fused to generate the target image. Since the light intensity is high, interpolating each sub-pixel value directly improves the resolution of the final target image while maintaining a high signal-to-noise ratio.
In one embodiment, as shown in FIG. 8, each RGBW pixel array includes a plurality of pixel units, each pixel unit includes a plurality of pixels, and each pixel includes a plurality of sub-pixels; operation 440, acquiring the phase information collected by the target pixel and calculating the phase difference according to the phase information of the target pixel, includes:
Operation 820: if the light intensity of the current shooting scene exceeds the first preset threshold and is not greater than the second preset threshold, acquire the phase information of the sub-pixels of each pixel in the target pixel.
Specifically, since at least one kind of color pixel in the RGBW pixel array is used as the target pixel when the light intensity exceeds the first preset threshold, the target pixel is likewise at least one kind of color pixel when the light intensity exceeds the first preset threshold and is not greater than the second. The phase information of the sub-pixels of each pixel in the target pixel, i.e., of at least one kind of color pixel in the RGBW pixel array, is then acquired.
Suppose the light intensity of the current shooting scene exceeds the first preset threshold and is not greater than the second, and at least one kind of color pixel in the RGBW pixel array is used as the target pixel. Here, any one of the R, G, and B pixels may be used as the target pixel; any two of them may be used; or all of the R, G, and B pixels may be used. This is not limited in the present application.
Operation 840: for each pixel unit, combine the phase information of the sub-pixels that are in the same region, along the first direction, within the same-color pixels, to obtain the combined phase information of the same-color pixels of each pixel unit in the first direction, and calculate the phase difference in the first direction according to the combined phase information of each pixel in the first direction; or,
With reference to FIG. 3, the RGBW pixel array includes 4 pixel units. For each pixel unit, first determine the sub-pixels of the same-color pixels that are in the same region within those pixels along the first direction. The first direction is the vertical direction of the RGBW pixel array, and the second direction is the horizontal direction; the first and second directions are perpendicular to each other. Of course, the phase differences of the sub-pixels of the target pixel in other directions may also be calculated, which is not limited in the present application.
The case where the R pixels are used as the target pixel is described below as an example.
FIG. 9 is a schematic diagram of focus control in one embodiment. For the R pixel unit in the RGBW pixel array 920, first determine the sub-pixels of the R pixels that are in the same region within the R pixels along the first direction. For example, the 4 sub-pixels of the upper-left R pixel in the first pixel unit (R pixel unit) are numbered sub-pixel 1, sub-pixel 2, sub-pixel 3, and sub-pixel 4 from top to bottom and from left to right (see FIG. 5), and the 4 sub-pixels of the lower-right R pixel are numbered sub-pixel 5, sub-pixel 6, sub-pixel 7, and sub-pixel 8 in the same order (see FIG. 5). Then the sub-pixels of the upper-left R pixel in the same region along the first direction are determined to be sub-pixels 1 and 3, and those of the lower-right R pixel are determined to be sub-pixels 4 and 6.
Next, the phase information of the sub-pixels of the R pixels in the R pixel unit that are in the same region along the first direction is combined: the phase information of sub-pixels 1 and 3 (the left signal) is combined to generate the left phase information, and the phase information of sub-pixels 4 and 6 (the right signal) is combined to generate the right phase information; finally, the left and right phase information are combined to obtain the combined phase information of the R pixels of each R pixel unit in the first direction, generating the combined RGB pixel array 940.
The phase difference in the first direction is calculated according to the combined phase information of each pixel in the first direction. For example, for the combined RGB pixel array 940, the phase difference of the R pixels in the first direction is calculated according to the combined phase information of the two R pixels in the R pixel unit in the first direction.
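The column-wise merging described above can be sketched as follows. This is a minimal illustration treating each pixel as a 2x2 block of phase samples; the same merging applies unchanged to the W pixels in the low-light embodiment later in this description.

```python
import numpy as np

def combine_first_direction(pixel: np.ndarray):
    """Combine a pixel's 2x2 sub-pixel phase samples along the first
    (vertical) direction.

    Summing each column merges the sub-pixels that share the same
    region in the vertical direction, yielding one left signal and one
    right signal with a higher signal-to-noise ratio. A sketch of the
    merging rule only, not the sensor's exact readout.
    """
    left = float(pixel[:, 0].sum())    # e.g. sub-pixels 1 and 3
    right = float(pixel[:, 1].sum())   # the right-column counterparts
    return left, right
```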
Similarly, if the G pixels are used as the target pixel, the above operations are performed on the G pixels in each pixel unit, and the phase difference of the G pixels in the first direction is calculated according to the combined phase information of the two G pixels in each G pixel unit in the first direction.
Similarly, if the B pixels are used as the target pixel, the above operations are performed on the B pixels in each pixel unit, and the phase difference of the B pixels in the first direction is calculated according to the combined phase information of the two B pixels in each B pixel unit in the first direction.
If any two of the R, G, and B pixels, or all three, are used as the target pixel, the corresponding phase differences are selected from the phase differences of the R, G, and B pixels in the first direction calculated above and combined to generate the phase difference in the first direction.
Operation 860: for each pixel unit, combine the phase information of the sub-pixels that are in the same region, along the second direction, within the same-color pixels, to obtain the combined phase information of the same-color pixels of each pixel unit in the second direction, and calculate the phase difference in the second direction according to the combined phase information of each pixel in the second direction; the first direction and the second direction are perpendicular to each other.
The case where the R pixels are used as the target pixel is described below as an example.
For the R pixel unit in the RGBW pixel array, first determine the sub-pixels of the R pixels that are in the same region within the same-color pixels along the second direction. For example, the 4 sub-pixels of the upper-left R pixel in the first pixel unit (R pixel unit) are numbered sub-pixel 1, sub-pixel 2, sub-pixel 3, and sub-pixel 4 from top to bottom and from left to right, and the 4 sub-pixels of the lower-right R pixel are numbered sub-pixel 5, sub-pixel 6, sub-pixel 7, and sub-pixel 8 in the same order. Then the sub-pixels of the upper-left R pixel in the same region along the second direction are determined to be sub-pixels 1 and 2, and those of the lower-right R pixel are determined to be sub-pixels 4 and 5.
Next, the phase information of the sub-pixels of the R pixels in the R pixel unit that are in the same region along the second direction is combined: the phase information of sub-pixels 1 and 2 is combined to generate the upper phase information, and the phase information of sub-pixels 4 and 5 is combined to generate the lower phase information; finally, the upper and lower phase information are combined to obtain the combined phase information of each pixel in the second direction, generating the combined RGB pixel array.
The phase difference in the second direction is calculated according to the combined phase information of each pixel in the second direction. For example, for the combined RGB pixel array 940, the phase difference of the R pixels in the second direction is calculated according to the combined phase information of the two R pixels in the R pixel unit in the second direction.
Similarly, if the G pixels are used as the target pixel, the above operations are performed on the G pixels in each pixel unit, and the phase difference of the G pixels in the second direction is calculated according to the combined phase information of the two G pixels in each G pixel unit in the second direction.
Similarly, if the B pixels are used as the target pixel, the above operations are performed on the B pixels in each pixel unit, and the phase difference of the B pixels in the second direction is calculated according to the combined phase information of the two B pixels in each B pixel unit in the second direction.
If any two of the R, G, and B pixels, or all three, are used as the target pixel, the corresponding phase differences are selected from the phase differences of the R, G, and B pixels in the second direction calculated above and combined to generate the phase difference in the second direction.
In this embodiment, if the light intensity of the current shooting scene exceeds the first preset threshold and does not exceed the second, the phase information of the sub-pixels of each pixel in the target pixel is acquired. Because the light intensity is somewhat weak, the phase information collected by the sub-pixels of the RGB pixels is not very accurate, and some RGB pixels may not collect phase information at all. Therefore, for each pixel unit, the phase information of the sub-pixels that are in the same region within the same-color pixels along the first/second direction is combined to obtain the combined phase information of the same-color pixels of each pixel unit in that direction, and the phase difference in that direction is calculated from the combined phase information of each pixel. Combining the phase information improves the accuracy of the acquired phase information and raises its signal-to-noise ratio. Focusing based on the phase difference in the first/second direction then ultimately improves the accuracy of phase focusing.
In one embodiment, as shown in FIG. 10, after focus control is performed based on the phase difference, the method further includes:
Operation 1010: control exposure of the RGBW pixel array, and acquire the pixel values of the sub-pixels in the RGBW pixel array.
After focusing is achieved through the focus control method in the previous embodiment, the RGBW pixel array is exposed and the pixel values of its sub-pixels are acquired, i.e., the pixel values of the sub-pixels of every R pixel, G pixel, B pixel, and W pixel. FIG. 11 is a schematic diagram of generating a target image in one embodiment; the acquired sub-pixel values constitute the original RAW image 1102.
Operation 1030: calculate the pixel values of the color pixels according to the pixel values of the sub-pixels of each color pixel.
The pixel values of the sub-pixels of the R, G, and B pixels are acquired from the sub-pixel values; the sub-pixel values of the R pixels are combined to obtain the pixel values of the R pixels, the sub-pixel values of the G pixels are combined to obtain the pixel values of the G pixels, and the sub-pixel values of the B pixels are combined to obtain the pixel values of the B pixels.
As shown in FIG. 11, the pixel values of the sub-pixels of the R, G, and B pixels are acquired from the original RAW image 1102 to generate the RAW image 1104 corresponding to the RGB pixels. The sub-pixel values of the R, G, and B pixels are combined to obtain the respective pixel values, and based on the R, G, and B pixel values, the combined RAW image 1106 corresponding to the RGB pixels is generated.
Operation 1050: perform interpolation on the pixel values of the color pixels to generate a Bayer array image.
Interpolation is performed on the pixel values of the R, G, and B pixels in the combined RAW image 1106 to generate the Bayer array image 1108. Specifically, a remosaic interpolation algorithm may be used, which mainly works by swapping pixels, or by exploiting the relationship between a pixel and its surrounding related pixels: weight ratios are calculated from the distances between the pixel and the surrounding related pixels, and the pixel values of the surrounding related pixels are then generated from the weight ratios and the pixel's own value.
Operation 1070: calculate the pixel values of the W pixels according to the pixel values of their sub-pixels, and perform interpolation on the W pixel values to generate a W pixel image.
The pixel values of the sub-pixels of the W pixels are acquired from the sub-pixel values, and the sub-pixel values of the W pixels are combined to obtain the pixel values of the W pixels.
The pixel values of the sub-pixels of the W pixels are acquired from the original RAW image 1102 to generate the RAW image 1110 corresponding to the W pixels. The sub-pixel values of the W pixels are combined to obtain the W pixel values, generating the combined W image 1112. Interpolation is performed on the W pixel values in the combined image 1112 to generate the W pixel image 1114.
Operation 1090: fuse the Bayer array image with the W pixel image to generate the target image.
The Bayer array image 1108 is fused with the W pixel image 1114 to generate the target image 1116. Here, the fusion may directly combine the pixel value of each pixel in the Bayer array image 1108 with the pixel value of the corresponding pixel in the W pixel image 1114 to generate the pixel value at the corresponding position in the target image 1116.
In this embodiment, if the light intensity of the current shooting scene exceeds the first preset threshold and is not greater than the second, then after focusing according to the focusing method of the above embodiment, the RGBW pixel array is exposed and the pixel values of its sub-pixels are acquired. Because the light intensity is somewhat weak, the signal-to-noise ratio of each sub-pixel's value is low. Combining the sub-pixel values of each color pixel to generate the color pixel's value improves the signal-to-noise ratio of the color pixel values; interpolation is then performed on the color pixel values to generate the Bayer array image. Combining the sub-pixel values of each W pixel to generate the W pixel's value improves the signal-to-noise ratio of the W pixel values; interpolation is then performed on the W pixel values to generate the W pixel image. The Bayer array image and the W pixel image are fused to generate the target image.
Although the resolution of the final target image is reduced, the signal corresponding to the collected pixel values is increased, so the signal-to-noise ratio of the target image is improved.
In one embodiment, the color pixels include R pixels, G pixels, and B pixels; calculating the pixel values of the color pixels according to the pixel values of their sub-pixels includes:
acquiring the pixel values of the sub-pixels of the R, G, and B pixels from the sub-pixel values, combining the sub-pixel values of the R pixels to obtain the pixel values of the R pixels, combining the sub-pixel values of the G pixels to obtain the pixel values of the G pixels, and combining the sub-pixel values of the B pixels to obtain the pixel values of the B pixels.
In this embodiment, when the sub-pixel values of an R pixel are combined to obtain the R pixel's value, a weighted average of the sub-pixel values may be computed directly to generate the R pixel's value; the G pixel values and B pixel values are computed in the same way. Combining the sub-pixel values of each color pixel to generate the color pixel's value improves the signal-to-noise ratio of the color pixel values.
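The 2x2 combination can be sketched as a binning step over the whole RAW frame. A minimal illustration, assuming one sample per sub-pixel and a frame of shape (2H, 2W); the optional weights stand in for the weighted average mentioned above.

```python
import numpy as np

def bin_subpixels(raw: np.ndarray, weights=None) -> np.ndarray:
    """Combine each pixel's 2x2 sub-pixel values into one pixel value.

    raw has shape (2H, 2W); the result has shape (H, W). Averaging the
    four sub-pixel samples raises the per-pixel signal-to-noise ratio
    at the cost of resolution.
    """
    h, w = raw.shape[0] // 2, raw.shape[1] // 2
    blocks = raw.reshape(h, 2, w, 2).transpose(0, 2, 1, 3).reshape(h, w, 4)
    if weights is None:
        return blocks.mean(axis=-1)
    weights = np.asarray(weights, dtype=np.float64)
    return (blocks * weights).sum(axis=-1) / weights.sum()
```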
In one embodiment, calculating the pixel values of the W pixels according to the pixel values of their sub-pixels includes:
acquiring the pixel values of the sub-pixels of the W pixels from the sub-pixel values, and combining the sub-pixel values of the W pixels to obtain the pixel values of the W pixels.
In this embodiment, combining the sub-pixel values of a W pixel to obtain its value may directly compute a weighted average of the sub-pixel values to generate the W pixel's value. Combining the sub-pixel values of each W pixel to generate the W pixel values improves their signal-to-noise ratio.
In one embodiment, determining, from the RGBW pixel array according to the light intensity of the current shooting scene and the preset light intensity threshold, the target pixel corresponding to the light intensity of the current shooting scene includes:
if the light intensity of the current shooting scene is less than or equal to the first preset threshold, using the W pixels in the RGBW pixel array as the target pixel.
In this embodiment, if the light intensity of the current shooting scene is less than or equal to the first preset threshold, the light is weak. Since W pixels have higher sensitivity, the W pixels are determined to be the target pixel so that more phase information can be obtained through them.
Following the previous embodiment, each RGBW pixel array includes a plurality of pixel units. As shown in FIG. 12, operation 440, acquiring the phase information collected by the target pixel and calculating the phase difference according to the phase information of the target pixel, includes:
Operation 1220: for the W pixels, acquire the phase information of each sub-pixel in the W pixels.
FIG. 13 is a schematic diagram of focus control in one embodiment. If the light intensity of the current shooting scene is less than or equal to the first preset threshold, the light is very weak, so the W pixels in the RGBW pixel array are used as the target pixel. The phase information of the sub-pixels of the W pixels is then acquired, generating the W pixel array 1320.
Operation 1240: for each pixel unit, combine the phase information of the sub-pixels of the W pixels that are in the same region along the first direction to obtain the combined phase information of the W pixels in the first direction, and calculate the phase difference in the first direction according to the combined phase information of the W pixels in the first direction; or,
For the W pixel units in the W pixel array 1320, first determine the sub-pixels of the W pixels that are in the same region along the first direction. For example, the 4 sub-pixels of the upper-right W pixel in the first pixel unit (R pixel unit) are numbered sub-pixel 1, sub-pixel 2, sub-pixel 3, and sub-pixel 4 from top to bottom and from left to right, and the 4 sub-pixels of the lower-left W pixel are numbered sub-pixel 5, sub-pixel 6, sub-pixel 7, and sub-pixel 8 in the same order. Then the sub-pixels of the upper-right W pixel in the same region along the first direction are determined to be sub-pixels 1 and 3, and those of the lower-left W pixel are determined to be sub-pixels 4 and 6.
Next, the phase information of the sub-pixels of the W pixels in the R pixel unit that are in the same region along the first direction is combined: the phase information of sub-pixels 1 and 3 (the left signal) is combined to generate the left phase information, and the phase information of sub-pixels 4 and 6 (the right signal) is combined to generate the right phase information; finally, the left and right phase information are combined to obtain the combined phase information of each pixel in the first direction, generating the combined W pixel array 1340.
The phase difference in the first direction is calculated according to the combined phase information of each pixel in the first direction. For example, for the combined W pixel array 1340, the phase difference of the W pixels in the first direction is calculated according to the combined phase information of the two W pixels in the R pixel unit in the first direction.
Alternatively, the combined phase information of the W pixels in the combined W pixel array 1340 may be combined again to generate the W pixel array 1360, from which the phase difference of the W pixels in the first direction is calculated.
Operation 1260: for each pixel unit, combine the phase information of the sub-pixels of the W pixels that are in the same region along the second direction to obtain the combined phase information of the W pixels in the second direction, and calculate the phase difference in the second direction according to the combined phase information of the W pixels in the second direction; the first direction and the second direction are perpendicular to each other.
For the W pixel units in the W pixel array 1320, first determine the sub-pixels of the W pixels that are in the same region along the second direction. For example, the 4 sub-pixels of the upper-right W pixel in the first pixel unit (R pixel unit) are numbered sub-pixel 1, sub-pixel 2, sub-pixel 3, and sub-pixel 4 from top to bottom and from left to right, and the 4 sub-pixels of the lower-left W pixel are numbered sub-pixel 5, sub-pixel 6, sub-pixel 7, and sub-pixel 8 in the same order. Then the sub-pixels of the upper-right W pixel in the same region along the second direction are determined to be sub-pixels 1 and 2, and those of the lower-left W pixel are determined to be sub-pixels 4 and 5.
Next, the phase information of the sub-pixels of the W pixels in the R pixel unit that are in the same region along the second direction is combined: the phase information of sub-pixels 1 and 2 is combined to generate the upper phase information, and the phase information of sub-pixels 4 and 5 is combined to generate the lower phase information; finally, the upper and lower phase information are combined to obtain the combined phase information of each pixel in the second direction, generating the combined W pixel array.
The phase difference in the second direction is calculated according to the combined phase information of each pixel in the second direction. For example, for the combined W pixel array 1340, the phase difference of the W pixels in the second direction is calculated according to the combined phase information of the two W pixels in the R pixel unit in the second direction.
Alternatively, the combined phase information of the W pixels in the combined W pixel array 1340 may be combined again to generate the W pixel array 1360, from which the phase difference of the W pixels in the second direction is calculated.
In this embodiment, if the light intensity of the current shooting scene is less than or equal to the first preset threshold, the W pixels in the RGBW pixel array are used as the target pixel, because the light intensity is very weak. For each pixel unit, the phase information of the sub-pixels of the W pixels that are in the same region along the first/second direction is combined to obtain the combined phase information of the W pixels in that direction, from which the phase difference in that direction is calculated; the first and second directions are perpendicular to each other. Combining the phase information improves the accuracy of the acquired phase information and raises its signal-to-noise ratio. Focusing based on the phase difference in the first/second direction then ultimately improves the accuracy of phase focusing.
In one embodiment, the plurality of photosensitive elements corresponding to a pixel are arranged center-symmetrically.
With reference to FIG. 3, which is a schematic diagram of part of an image sensor in one embodiment: the image sensor includes a plurality of RGBW pixel arrays arranged in an array, and FIG. 3 shows one RGBW pixel array. Each RGBW pixel array includes a plurality of pixel units Z; as shown in FIG. 3, each RGBW pixel array includes 4 pixel units Z, which are a red pixel unit, a green pixel unit, a green pixel unit, and a blue pixel unit, respectively.
Each pixel unit Z includes W pixels D and color pixels D arranged along the diagonals, and each pixel D corresponds to one microlens. The color pixels D include R pixels, G pixels, and B pixels. Specifically, a red pixel unit includes 2 W pixels and 2 R pixels arranged along the diagonals; a green pixel unit includes 2 W pixels and 2 G pixels arranged along the diagonals; a blue pixel unit includes 2 W pixels and 2 B pixels arranged along the diagonals.
Each W pixel D includes a plurality of sub-pixels d arranged in an array, each color pixel D likewise includes a plurality of sub-pixels d arranged in an array, and each sub-pixel d corresponds to one photosensitive element. Since the photosensitive elements corresponding to a pixel are arranged center-symmetrically, the W, R, G, and B pixels include a plurality of center-symmetrically arranged sub-pixels. That is, the photosensitive elements corresponding to these sub-pixels may be arranged center-symmetrically in various layouts or shapes, and are not limited to the square arrangement shown in FIG. 3.
In this embodiment, the photosensitive elements corresponding to the sub-pixels may be arranged center-symmetrically in various layouts or shapes, with each sub-pixel d corresponding to one photosensitive element. The W, R, G, and B pixels therefore include a plurality of center-symmetrically arranged sub-pixels. This provides diverse arrangements for the sub-pixels, so they can collect diverse phase information, improving the accuracy of subsequent focusing.
In one embodiment, the plurality of photosensitive elements corresponding to a pixel are arranged center-symmetrically in a trapezoidal shape.
FIG. 14 is a schematic diagram of an RGBW pixel array. Each RGBW pixel array includes 4 pixel units Z, which are a red pixel unit, a green pixel unit, a green pixel unit, and a blue pixel unit, respectively. Each pixel unit Z includes W pixels D and color pixels D arranged along the diagonals, and each pixel D corresponds to one microlens. The color pixels D include R pixels, G pixels, and B pixels.
Each W pixel includes a plurality of sub-pixels d arranged in an array, and these sub-pixels are arranged center-symmetrically in a trapezoidal shape; likewise for each R pixel, each G pixel, and each B pixel. Each sub-pixel d corresponds to one photosensitive element, which may be a photodiode (PD). As shown in FIG. 14, both the left PD and the right PD have a trapezoidal structure, and the left PD and the right PD are arranged center-symmetrically.
Optionally, the W, R, G, and B pixels in the RGBW pixel array may also be combined in a variety of different arrangements, which is not specifically limited in the present application.
In this embodiment, the photosensitive elements corresponding to the sub-pixels may be arranged center-symmetrically in various layouts or shapes, with each sub-pixel d corresponding to one photosensitive element. The W, R, G, and B pixels therefore include a plurality of sub-pixels arranged center-symmetrically in a trapezoidal shape. This provides diverse arrangements for the sub-pixels, so they can collect diverse phase information, improving the accuracy of subsequent focusing.
In one embodiment, the plurality of photosensitive elements corresponding to a pixel are arranged center-symmetrically in an L-shape.
FIG. 15 is a schematic diagram of an RGBW pixel array. Each RGBW pixel array includes 4 pixel units Z, which are a red pixel unit, a green pixel unit, a green pixel unit, and a blue pixel unit, respectively. Each pixel unit Z includes W pixels D and color pixels D arranged along the diagonals, and each pixel D corresponds to one microlens. The color pixels D include R pixels, G pixels, and B pixels.
Each W pixel includes a plurality of sub-pixels d arranged in an array, and these sub-pixels are arranged center-symmetrically in an L-shape; likewise for each R pixel, each G pixel, and each B pixel. Each sub-pixel d corresponds to one photosensitive element, which may be a photodiode (PD). As shown in FIG. 15, both the left PD and the right PD have an L-shaped structure, and the left PD and the right PD are arranged center-symmetrically.
Optionally, the W, R, G, and B pixels in the RGBW pixel array may also be combined in a variety of different arrangements, which is not specifically limited in the present application.
In this embodiment, the photosensitive elements corresponding to the sub-pixels may be arranged center-symmetrically in various layouts or shapes, with each sub-pixel d corresponding to one photosensitive element. The W, R, G, and B pixels therefore include a plurality of sub-pixels arranged center-symmetrically in an L-shape. This provides diverse arrangements for the sub-pixels, so they can collect diverse phase information, improving the accuracy of subsequent focusing.
In one embodiment, as shown in FIG. 16, a focus control apparatus 1600 is provided, applied to an electronic device, where the electronic device includes an image sensor and the image sensor includes an RGBW pixel array. The apparatus includes:
a target pixel determination module 1620 configured to determine, from the RGBW pixel array according to the light intensity of the current shooting scene, a target pixel corresponding to the light intensity of the current shooting scene, where the target pixel includes W pixels or at least one kind of color pixel in the RGBW pixel array;
a phase difference calculation module 1640 configured to acquire phase information of the target pixel and calculate a phase difference according to the phase information of the target pixel;
a focus control module 1660 configured to perform focus control based on the phase difference.
In one embodiment, the target pixel determination module 1620 is further configured to determine, from the RGBW pixel array according to the light intensity of the current shooting scene and a preset light intensity threshold, the target pixel corresponding to the light intensity of the current shooting scene.
In one embodiment, the target pixel determination module 1620 includes:
a first target pixel determination unit configured to use at least one kind of color pixel in the RGBW pixel array as the target pixel if the light intensity of the current shooting scene exceeds a first preset threshold.
In one embodiment, the phase difference calculation module 1640 is configured to: if the light intensity of the current shooting scene exceeds a second preset threshold, acquire the phase information of the sub-pixels of each pixel in the target pixel, where the second preset threshold is greater than the first preset threshold; and, for two pixels of the same color in the target pixel, calculate the phase difference of the target pixel according to the phase information of each pair of sub-pixels in the two same-color pixels, where the two same-color pixels are adjacent along a diagonal of the pixel array, and the sub-pixels of each pair are located one in each of the two same-color pixels, at the same position within each pixel.
In one embodiment, a focus control apparatus is provided, and the apparatus further includes:
a first target image generation module configured to control exposure of the RGBW pixel array and acquire the pixel values of the sub-pixels in the RGBW pixel array; acquire the pixel values of the sub-pixels of the color pixels from the sub-pixel values and interpolate them to generate a Bayer array image; acquire the pixel values of the sub-pixels of the W pixels from the sub-pixel values and interpolate them to generate a W pixel image; and fuse the Bayer array image with the W pixel image to generate the target image.
In one embodiment, each RGBW pixel array includes a plurality of pixel units, each pixel unit includes a plurality of pixels, and each pixel includes a plurality of sub-pixels; the phase difference calculation module 1640 includes:
a first phase difference calculation unit configured to: if the light intensity of the current shooting scene exceeds the first preset threshold and is not greater than the second preset threshold, acquire the phase information of the sub-pixels of each pixel in the target pixel;
for each pixel unit, combine the phase information of the sub-pixels that are in the same region, along the first direction, within the same-color pixels to obtain the combined phase information of the same-color pixels of each pixel unit in the first direction, and calculate the phase difference in the first direction according to the combined phase information of each pixel in the first direction; or,
for each pixel unit, combine the phase information of the sub-pixels that are in the same region, along the second direction, within the same-color pixels to obtain the combined phase information of the same-color pixels of each pixel unit in the second direction, and calculate the phase difference in the second direction according to the combined phase information of each pixel in the second direction; the first direction and the second direction are perpendicular to each other.
In one embodiment, the focus control module 1660 is further configured to: if the preview image corresponding to the current shooting scene includes texture features in the second direction, perform focus control based on the phase difference in the first direction; or, if the preview image corresponding to the current shooting scene includes texture features in the first direction, perform focus control based on the phase difference in the second direction.
In one embodiment, a focus control apparatus is provided, and the apparatus further includes:
a second target image generation module configured to control exposure of the RGBW pixel array and acquire the pixel values of the sub-pixels in the RGBW pixel array;
calculate the pixel values of the color pixels according to the pixel values of the sub-pixels of each color pixel;
perform interpolation on the pixel values of the color pixels to generate a Bayer array image;
calculate the pixel values of the W pixels according to the pixel values of their sub-pixels, and perform interpolation on the W pixel values to generate a W pixel image;
and fuse the Bayer array image with the W pixel image to generate the target image.
In one embodiment, the color pixels include R pixels, G pixels, and B pixels; the second target image generation module is further configured to acquire the pixel values of the sub-pixels of the R, G, and B pixels from the sub-pixel values, combine the sub-pixel values of the R pixels to obtain the pixel values of the R pixels, combine the sub-pixel values of the G pixels to obtain the pixel values of the G pixels, and combine the sub-pixel values of the B pixels to obtain the pixel values of the B pixels.
In one embodiment, the second target image generation module is further configured to acquire the pixel values of the sub-pixels of the W pixels from the sub-pixel values and combine them to obtain the pixel values of the W pixels.
In one embodiment, the target pixel determination module 1620 includes:
a second target pixel determination unit configured to use the W pixels in the RGBW pixel array as the target pixel if the light intensity of the current shooting scene is less than or equal to the first preset threshold.
In one embodiment, the phase difference calculation module 1640 includes:
a second phase difference calculation unit configured to: for the W pixels, acquire the phase information of each sub-pixel in the W pixels;
for each pixel unit, combine the phase information of the sub-pixels of the W pixels that are in the same region along the first direction to obtain the combined phase information of the W pixels in the first direction, and calculate the phase difference in the first direction according to the combined phase information of the W pixels in the first direction; or,
for each pixel unit, combine the phase information of the sub-pixels of the W pixels that are in the same region along the second direction to obtain the combined phase information of the W pixels in the second direction, and calculate the phase difference in the second direction according to the combined phase information of the W pixels in the second direction; the first direction and the second direction are perpendicular to each other.
In one embodiment, the image sensor includes a plurality of RGBW pixel arrays arranged in an array, each RGBW pixel array includes a plurality of pixel units, each pixel unit includes W pixels arranged along one diagonal and color pixels arranged along the other diagonal, and each pixel corresponds to one microlens and a plurality of photosensitive elements; each pixel includes a plurality of sub-pixels arranged in an array, and each sub-pixel corresponds to one photosensitive element; the color pixels include R pixels, G pixels, and B pixels.
In one embodiment, an imaging device is provided, including a lens, an optical filter, and an image sensor, where the lens, the optical filter, and the image sensor are located in sequence on the incident light path;
the image sensor includes a plurality of RGBW pixel arrays arranged in an array, each RGBW pixel array includes a plurality of pixel units, each pixel unit includes W pixels arranged along one diagonal and color pixels arranged along the other diagonal, and each pixel corresponds to one microlens and a plurality of photosensitive elements; each pixel includes a plurality of sub-pixels arranged in an array, and each sub-pixel corresponds to one photosensitive element; the color pixels include R pixels, G pixels, and B pixels.
In one embodiment, the plurality of photosensitive elements corresponding to a pixel are arranged center-symmetrically.
It should be understood that although the operations in the flowcharts above are displayed in sequence as indicated by the arrows, these operations are not necessarily executed in that order. Unless explicitly stated herein, there is no strict order restriction on their execution, and they may be executed in other orders. Moreover, at least some of the operations in the flowcharts may include multiple sub-operations or stages; these sub-operations or stages are not necessarily completed at the same time but may be executed at different times, and their execution order is not necessarily sequential; they may be executed in turn or alternately with other operations or with at least part of the sub-operations or stages of other operations.
The division of the modules in the above focus control apparatus is for illustration only. In other embodiments, the focus control apparatus may be divided into different modules as needed to complete all or part of the functions of the above focus control apparatus.
For specific limitations on the focus control apparatus, refer to the limitations on the focus control method above, which are not repeated here. Each module in the above focus control apparatus may be implemented wholly or partly by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, the processor of a computer device in hardware form, or stored in the memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
FIG. 17 is a schematic diagram of the internal structure of an electronic device in one embodiment. The electronic device may be any terminal device such as a mobile phone, tablet computer, notebook computer, desktop computer, PDA (Personal Digital Assistant), POS (Point of Sales) terminal, vehicle-mounted computer, or wearable device. The electronic device includes a processor and a memory connected by a system bus. The processor may include one or more processing units, and may be a CPU (Central Processing Unit) or a DSP (Digital Signal Processor), etc. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the focus control method provided in the following embodiments. The internal memory provides a cached, high-speed runtime environment for the operating system and the computer program in the non-volatile storage medium.
The implementation of each module in the focus control apparatus provided in the embodiments of the present application may take the form of a computer program. The computer program may run on a terminal or a server. The program modules constituted by the computer program may be stored in the memory of the electronic device. When the computer program is executed by a processor, the operations of the methods described in the embodiments of the present application are implemented.
An embodiment of the present application also provides a computer-readable storage medium: one or more non-volatile computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the operations of the focus control method.
An embodiment of the present application also provides a computer program product containing instructions that, when run on a computer, causes the computer to execute the focus control method.
Any reference to memory, storage, a database, or other media used in the present application may include non-volatile and/or volatile memory. Non-volatile memory may include ROM (Read-Only Memory), PROM (Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), or flash memory. Volatile memory may include RAM (Random Access Memory), which serves as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as SRAM (Static Random Access Memory), DRAM (Dynamic Random Access Memory), SDRAM (Synchronous Dynamic Random Access Memory), DDR SDRAM (Double Data Rate Synchronous Dynamic Random Access Memory), ESDRAM (Enhanced Synchronous Dynamic Random Access Memory), SLDRAM (Sync Link Dynamic Random Access Memory), RDRAM (Rambus Dynamic Random Access Memory), and DRDRAM (Direct Rambus Dynamic Random Access Memory). The above embodiments express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art may make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (18)

  1. A focus control method, applied to an electronic device, wherein the electronic device comprises an image sensor and the image sensor comprises an RGBW pixel array, the method comprising:
    determining, from the RGBW pixel array according to the light intensity of the current shooting scene, a target pixel corresponding to the light intensity of the current shooting scene, wherein the target pixel comprises W pixels or at least one kind of color pixel in the RGBW pixel array;
    acquiring phase information of the target pixel, and calculating a phase difference according to the phase information of the target pixel;
    performing focus control based on the phase difference.
  2. The method according to claim 1, wherein the determining, from the RGBW pixel array according to the light intensity of the current shooting scene, a target pixel corresponding to the light intensity of the current shooting scene comprises:
    determining, from the RGBW pixel array according to the light intensity of the current shooting scene and a preset light intensity threshold, the target pixel corresponding to the light intensity of the current shooting scene.
  3. The method according to claim 2, wherein the determining, from the RGBW pixel array according to the light intensity of the current shooting scene and the preset light intensity threshold, the target pixel corresponding to the light intensity of the current shooting scene comprises:
    if the light intensity of the current shooting scene exceeds a first preset threshold, using at least one kind of color pixel in the RGBW pixel array as the target pixel.
  4. The method according to claim 3, wherein the acquiring the phase information collected by the target pixel and calculating a phase difference according to the phase information of the target pixel comprises:
    if the light intensity of the current shooting scene exceeds a second preset threshold, acquiring the phase information of the sub-pixels of each pixel in the target pixel, wherein the second preset threshold is greater than the first preset threshold;
    for two pixels of the same color in the target pixel, calculating the phase difference of the target pixel according to the phase information of each pair of sub-pixels in the two same-color pixels, wherein the two same-color pixels are adjacent along a diagonal of the pixel array, and the sub-pixels of each pair are located one in each of the two same-color pixels, at the same position within each pixel.
  5. The method according to claim 4, wherein after the performing focus control based on the phase difference, the method further comprises:
    controlling exposure of the RGBW pixel array, and acquiring the pixel values of all the sub-pixels in the RGBW pixel array;
    acquiring the pixel values of the sub-pixels of the color pixels from the sub-pixel values, and performing interpolation on the pixel values of the sub-pixels of the color pixels to generate a Bayer array image;
    acquiring the pixel values of the sub-pixels of the W pixels from the sub-pixel values, and performing interpolation on the pixel values of the sub-pixels of the W pixels to generate a W pixel image;
    fusing the Bayer array image with the W pixel image to generate a target image.
  6. The method according to claim 3, wherein each RGBW pixel array comprises a plurality of pixel units, each pixel unit comprises a plurality of pixels, and each pixel comprises a plurality of sub-pixels; the acquiring the phase information collected by the target pixel and calculating a phase difference according to the phase information of the target pixel comprises:
    if the light intensity of the current shooting scene exceeds the first preset threshold and is not greater than the second preset threshold, acquiring the phase information of the sub-pixels of each pixel in the target pixel;
    for each pixel unit, combining the phase information of the sub-pixels that are in the same region, along a first direction, within the same-color pixels to obtain combined phase information of the same-color pixels of each pixel unit in the first direction, and calculating the phase difference in the first direction according to the combined phase information of each pixel in the first direction; or,
    for each pixel unit, combining the phase information of the sub-pixels that are in the same region, along a second direction, within the same-color pixels to obtain combined phase information of the same-color pixels of each pixel unit in the second direction, and calculating the phase difference in the second direction according to the combined phase information of each pixel in the second direction; the first direction and the second direction being perpendicular to each other.
  7. The method according to claim 6, wherein the performing focus control based on the phase difference comprises:
    if the preview image corresponding to the current shooting scene includes texture features in the second direction, performing focus control based on the phase difference in the first direction; or,
    if the preview image corresponding to the current shooting scene includes texture features in the first direction, performing focus control based on the phase difference in the second direction.
  8. The method according to claim 7, wherein after the performing focus control based on the phase difference, the method further comprises:
    controlling exposure of the RGBW pixel array, and acquiring the pixel values of the sub-pixels in the RGBW pixel array;
    calculating the pixel values of the color pixels according to the pixel values of the sub-pixels of each color pixel;
    performing interpolation on the pixel values of the color pixels to generate a Bayer array image;
    calculating the pixel values of the W pixels according to the pixel values of the sub-pixels of the W pixels, and performing interpolation on the pixel values of the W pixels to generate a W pixel image;
    fusing the Bayer array image with the W pixel image to generate a target image.
  9. The method according to claim 8, wherein the color pixels comprise R pixels, G pixels, and B pixels; the calculating the pixel values of the color pixels according to the pixel values of the sub-pixels of each color pixel comprises:
    acquiring the pixel values of the sub-pixels of the R, G, and B pixels from the sub-pixel values, combining the sub-pixel values of the R pixels to obtain the pixel values of the R pixels, combining the sub-pixel values of the G pixels to obtain the pixel values of the G pixels, and combining the sub-pixel values of the B pixels to obtain the pixel values of the B pixels.
  10. The method according to claim 8, wherein the calculating the pixel values of the W pixels according to the pixel values of the sub-pixels of the W pixels comprises:
    acquiring the pixel values of the sub-pixels of the W pixels from the sub-pixel values, and combining the sub-pixel values of the W pixels to obtain the pixel values of the W pixels.
  11. The method according to claim 2, wherein the determining, from the RGBW pixel array according to the light intensity of the current shooting scene and the preset light intensity threshold, the target pixel corresponding to the light intensity of the current shooting scene comprises:
    if the light intensity of the current shooting scene is less than or equal to a first preset threshold, using the W pixels in the RGBW pixel array as the target pixel.
  12. The method according to claim 11, wherein each RGBW pixel array comprises a plurality of pixel units, and the acquiring the phase information collected by the target pixel and calculating a phase difference according to the phase information of the target pixel comprises:
    for the W pixels, acquiring the phase information of each sub-pixel in the W pixels;
    for each pixel unit, combining the phase information of the sub-pixels of the W pixels that are in the same region along a first direction to obtain combined phase information of the W pixels in the first direction, and calculating the phase difference in the first direction according to the combined phase information of the W pixels in the first direction; or,
    for each pixel unit, combining the phase information of the sub-pixels of the W pixels that are in the same region along a second direction to obtain combined phase information of the W pixels in the second direction, and calculating the phase difference in the second direction according to the combined phase information of the W pixels in the second direction; the first direction and the second direction being perpendicular to each other.
  13. The method according to claim 1, wherein the image sensor comprises a plurality of RGBW pixel arrays arranged in an array, each RGBW pixel array comprises a plurality of pixel units, each pixel unit comprises W pixels arranged along one diagonal and color pixels arranged along the other diagonal, and each pixel corresponds to one microlens and a plurality of photosensitive elements; each pixel comprises a plurality of sub-pixels arranged in an array, and each sub-pixel corresponds to one photosensitive element; the color pixels comprise R pixels, G pixels, and B pixels.
  14. An imaging device, comprising a lens, an optical filter, and an image sensor, wherein the lens, the optical filter, and the image sensor are located in sequence on the incident light path;
    the image sensor comprises a plurality of RGBW pixel arrays arranged in an array, each RGBW pixel array comprises a plurality of pixel units, each pixel unit comprises W pixels arranged along one diagonal and color pixels arranged along the other diagonal, and each pixel corresponds to one microlens and a plurality of photosensitive elements; each pixel comprises a plurality of sub-pixels arranged in an array, and each sub-pixel corresponds to one photosensitive element; the color pixels comprise R pixels, G pixels, and B pixels.
  15. The imaging device according to claim 14, wherein the plurality of photosensitive elements corresponding to a pixel are arranged center-symmetrically.
  16. A focus control apparatus, applied to an electronic device, wherein the electronic device comprises an image sensor and the image sensor comprises an RGBW pixel array, the apparatus comprising:
    a target pixel determination module configured to determine, from the RGBW pixel array according to the light intensity of the current shooting scene, a target pixel corresponding to the light intensity of the current shooting scene, wherein the target pixel comprises W pixels or at least one kind of color pixel in the RGBW pixel array;
    a phase difference calculation module configured to acquire phase information of the target pixel and calculate a phase difference according to the phase information of the target pixel;
    a focus control module configured to perform focus control based on the phase difference.
  17. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the computer program, when executed by the processor, causes the processor to perform the operations of the focus control method according to any one of claims 1 to 13.
  18. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the operations of the method according to any one of claims 1 to 13.
PCT/CN2022/103859 2021-08-09 2022-07-05 Focus control method and apparatus, imaging device, electronic device, and computer-readable storage medium WO2023016144A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110909146.8A CN113660415A (zh) 2021-08-09 2021-08-09 Focus control method and apparatus, imaging device, electronic device, and computer-readable storage medium
CN202110909146.8 2021-08-09

Publications (1)

Publication Number Publication Date
WO2023016144A1 true WO2023016144A1 (zh) 2023-02-16

Family

ID=78478635

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/103859 WO2023016144A1 (zh) Focus control method and apparatus, imaging device, electronic device, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN113660415A (zh)
WO (1) WO2023016144A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113660415A (zh) * 2021-08-09 2021-11-16 Oppo广东移动通信有限公司 对焦控制方法、装置、成像设备、电子设备和计算机可读存储介质
CN113891006A (zh) * 2021-11-22 2022-01-04 Oppo广东移动通信有限公司 对焦控制方法、装置、图像传感器、电子设备和计算机可读存储介质
CN114222047A (zh) * 2021-12-27 2022-03-22 Oppo广东移动通信有限公司 对焦控制方法、装置、图像传感器、电子设备和计算机可读存储介质

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013055623A (ja) * 2011-09-06 2013-03-21 Sony Corp Image processing apparatus, image processing method, information recording medium, and program
CN105210369A (zh) * 2013-04-17 2015-12-30 法国甫托尼公司 Device for acquiring bimodal images
CN105611125A (zh) * 2015-12-18 2016-05-25 广东欧珀移动通信有限公司 Imaging method, imaging device, and electronic device
CN110087065A (zh) * 2019-04-30 2019-08-02 德淮半导体有限公司 Semiconductor device and manufacturing method thereof
CN110996077A (zh) * 2019-11-25 2020-04-10 Oppo广东移动通信有限公司 Image sensor, camera assembly, and mobile terminal
CN112235494A (zh) * 2020-10-15 2021-01-15 Oppo广东移动通信有限公司 Image sensor, control method, imaging device, terminal, and readable storage medium
CN113660415A (zh) * 2021-08-09 2021-11-16 Oppo广东移动通信有限公司 Focus control method and apparatus, imaging device, electronic device, and computer-readable storage medium
CN113891006A (zh) * 2021-11-22 2022-01-04 Oppo广东移动通信有限公司 Focus control method and apparatus, image sensor, electronic device, and computer-readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111741277B (zh) * 2020-07-13 2022-04-29 深圳市汇顶科技股份有限公司 Image processing method and image processing apparatus

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013055623A (ja) * 2011-09-06 2013-03-21 Sony Corp Image processing apparatus, image processing method, information recording medium, and program
CN105210369A (zh) * 2013-04-17 2015-12-30 法国甫托尼公司 Device for acquiring bimodal images
CN105611125A (zh) * 2015-12-18 2016-05-25 广东欧珀移动通信有限公司 Imaging method, imaging device, and electronic device
CN110087065A (zh) * 2019-04-30 2019-08-02 德淮半导体有限公司 Semiconductor device and manufacturing method thereof
CN110996077A (zh) * 2019-11-25 2020-04-10 Oppo广东移动通信有限公司 Image sensor, camera assembly, and mobile terminal
CN112235494A (zh) * 2020-10-15 2021-01-15 Oppo广东移动通信有限公司 Image sensor, control method, imaging device, terminal, and readable storage medium
CN113660415A (zh) * 2021-08-09 2021-11-16 Oppo广东移动通信有限公司 Focus control method and apparatus, imaging device, electronic device, and computer-readable storage medium
CN113891006A (zh) * 2021-11-22 2022-01-04 Oppo广东移动通信有限公司 Focus control method and apparatus, image sensor, electronic device, and computer-readable storage medium

Also Published As

Publication number Publication date
CN113660415A (zh) 2021-11-16

Similar Documents

Publication Publication Date Title
WO2023016144A1 (zh) Focus control method and apparatus, imaging device, electronic device, and computer-readable storage medium
US20230362344A1 (en) System and Methods for Calibration of an Array Camera
US10044926B2 (en) Optimized phase detection autofocus (PDAF) processing
JP6878604B2 (ja) Imaging method and electronic device
CN108141571B (zh) Maskless phase detection autofocus
WO2018196549A1 (en) Dual-core focusing image sensor, focusing control method for the same, and electronic device
US20190281226A1 (en) Image sensor including phase detection pixels and image pickup device
US10567636B2 (en) Resolution enhancement using sensor with plural photodiodes per microlens
KR102624107B1 (ko) Image sensor generating depth data from the path difference of light produced through a microlens covering a plurality of sub-pixels, and electronic device including the image sensor
JP2014011526A (ja) Image processing apparatus, imaging apparatus, and image processing method
CN112866549B (zh) Image processing method and apparatus, electronic device, and computer-readable storage medium
WO2023087908A1 (zh) Focus control method and apparatus, image sensor, electronic device, and computer-readable storage medium
WO2021093312A1 (zh) Imaging assembly, focusing method and apparatus, and electronic device
US11659294B2 (en) Image sensor, imaging apparatus, electronic device, image processing system, and signal processing method
CN112866675B (zh) Depth map generation method and apparatus, electronic device, and computer-readable storage medium
US11245878B2 (en) Quad color filter array image sensor with aperture simulation and phase detection
JP6353233B2 (ja) Image processing apparatus, imaging apparatus, and image processing method
WO2023124611A1 (zh) Focus control method and apparatus, image sensor, electronic device, and computer-readable storage medium
WO2023016183A1 (zh) Motion detection method and apparatus, electronic device, and computer-readable storage medium
US11431898B2 (en) Signal processing device and imaging device
Morimitsu et al. A 4M pixel full-PDAF CMOS image sensor with 1.58 μm 2× 1 On-Chip Micro-Split-Lens technology
US10205870B2 (en) Image capturing apparatus and control method thereof
CN112866554B (zh) Focusing method and apparatus, electronic device, and computer-readable storage medium
WO2021093528A1 (zh) Focusing method and apparatus, electronic device, and computer-readable storage medium
CN112866547B (zh) Focusing method and apparatus, electronic device, and computer-readable storage medium

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE