WO2021093312A1 - Imaging assembly, focusing method and device, and electronic device
- Publication number
- WO2021093312A1 (PCT/CN2020/093662)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pixel
- phase difference
- sub
- pixel point
- brightness
- Prior art date
Classifications
- H04N23/54—Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
- H04N23/55—Optical parts specially adapted for electronic image sensors; Mounting thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/672—Focus control based on electronic image sensor signals based on the phase difference signals
- H04N25/70—SSIS architectures; Circuits associated therewith
Definitions
- This application relates to the imaging field, and in particular to an imaging assembly, a focusing method and device, an electronic device, and a computer-readable storage medium.
- Phase detection auto focus is abbreviated PDAF.
- In traditional phase detection auto focus, phase detection pixel points are set in pairs among the pixel points of the image sensor. One phase detection pixel point of each pair is shielded on the left side and the other on the right side, so that the imaging beam directed at each pair is separated into a left part and a right part. The phase difference is obtained by comparing the images formed by the left and right imaging beams, and focusing is then performed based on the phase difference.
- The phase difference refers to the difference in the imaging positions of imaging light incident from different directions.
- the embodiments of the present application provide an imaging component, a focusing method and device, an electronic device, and a computer-readable storage medium, which can improve the accuracy of focusing.
- An imaging assembly includes an image sensor. The image sensor includes a plurality of pixel point groups arranged in an array, and each pixel point group includes M*N pixel points arranged in an array. Each pixel point corresponds to a photosensitive unit, where M and N are both natural numbers greater than or equal to 2.
- The photosensitive unit is configured to generate, through photoelectric conversion, a pixel signal consistent with the received light intensity.
- An imaging device includes a lens, a filter, and an imaging assembly, which are sequentially located on the incident light path.
- The imaging assembly includes an image sensor. The image sensor includes a plurality of pixel point groups arranged in an array, and each pixel point group includes M*N pixel points arranged in an array. Each pixel point corresponds to a photosensitive unit, where M and N are both natural numbers greater than or equal to 2.
- The photosensitive unit is configured to generate, through photoelectric conversion, a pixel signal consistent with the received light intensity.
- A focusing method is applied to an electronic device that includes an image sensor. The image sensor includes a plurality of pixel point groups arranged in an array, and each pixel point group includes M*N pixel points arranged in an array. Each pixel point corresponds to a photosensitive unit, where M and N are both natural numbers greater than or equal to 2. The method includes:
- acquiring a phase difference value through the image sensor, the phase difference value including a phase difference value in a first direction and a phase difference value in a second direction, the first direction and the second direction forming a preset angle;
- determining a defocus distance value according to the phase difference value in the first direction and the phase difference value in the second direction; and
- controlling the lens to move to focus according to the defocus distance value.
- A focusing device is applied to an electronic device that includes an image sensor. The image sensor includes a plurality of pixel point groups arranged in an array, and each pixel point group includes M*N pixel points arranged in an array. Each pixel point corresponds to a photosensitive unit, where M and N are both natural numbers greater than or equal to 2. The device includes:
- a phase difference acquisition module, configured to acquire a phase difference value through the image sensor, the phase difference value including a phase difference value in a first direction and a phase difference value in a second direction, the first direction and the second direction forming a preset angle;
- a processing module, configured to determine a defocus distance value according to the phase difference value in the first direction and the phase difference value in the second direction; and
- a control module, configured to control the lens to move to focus according to the defocus distance value.
- An electronic device includes a memory and a processor. A computer program is stored in the memory, and when the computer program is executed by the processor, the steps of the above method are implemented.
- A computer-readable storage medium has a computer program stored thereon, and when the computer program is executed by a processor, the steps of the above method are implemented.
- The above imaging assembly, focusing method and device, electronic device, and computer-readable storage medium obtain the phase difference value in the first direction and the phase difference value in the second direction, determine the defocus distance value from the two phase difference values, and control the lens to move according to the defocus distance value to realize phase detection auto focus. Because phase difference values in both directions can be output, the phase difference value can be used effectively for focusing even in scenes with horizontal texture or vertical texture, which improves the accuracy and stability of focusing.
- FIG. 1 is a schematic diagram of the principle of phase detection auto focus;
- FIG. 2 is a schematic diagram of phase detection pixel points arranged in pairs among the pixel points included in an image sensor;
- FIG. 3 is a schematic diagram of part of the structure of an image sensor in an embodiment;
- FIG. 4 is a schematic diagram of the structure of a pixel point in an embodiment;
- FIG. 5 is a schematic structural diagram of an imaging device in an embodiment;
- FIG. 6 is a schematic diagram of a filter set on a pixel point group in an embodiment;
- FIG. 7 is a flowchart of a focusing method in an embodiment;
- FIG. 8 is a flowchart of obtaining the phase difference in an embodiment;
- FIG. 9 is a schematic diagram of a pixel point group in an embodiment;
- FIG. 10 is a schematic diagram of a sub-brightness map in an embodiment;
- FIG. 11 is a flowchart of obtaining a target brightness map in an embodiment;
- FIG. 12 is a schematic diagram of generating the sub-brightness map corresponding to a pixel point group from the brightness values of the sub-pixel points included in the target pixel point of the group in an embodiment;
- FIG. 13 is a flowchart of obtaining a target brightness map in another embodiment;
- FIG. 14 is a schematic diagram of determining pixel points at the same position from each pixel point group in an embodiment;
- FIG. 15 is a schematic diagram of determining pixels at the same position from each intermediate phase difference map in an embodiment;
- FIG. 16 is a schematic diagram of a target phase difference map in an embodiment;
- FIG. 17 is a flowchart of a method for segmenting a target brightness map to obtain a first segmented brightness map and a second segmented brightness map in an embodiment;
- FIG. 18 is a schematic diagram of generating the first segmented brightness map and the second segmented brightness map from the target brightness map in an embodiment;
- FIG. 19 is a schematic diagram of generating the first segmented brightness map and the second segmented brightness map from the target brightness map in another embodiment;
- FIG. 20 is a flowchart of determining the phase difference of mutually matched pixels from the position difference of the pixels that match each other in the first segmented brightness map and the second segmented brightness map in an embodiment;
- FIG. 21 is a flowchart of determining the phase difference of mutually matched pixels from the position difference of the pixels that match each other in the first segmented brightness map and the second segmented brightness map in another embodiment;
- FIG. 22 is a structural block diagram of a focusing device in an embodiment;
- FIG. 23 is a block diagram of a computer device provided by an embodiment of this application.
- FIG. 1 is a schematic diagram of the principle of phase detection auto focus (PDAF).
- M1 is the position of the image sensor when the imaging device is in focus.
- the focus state refers to the state of successful focus.
- When the image sensor is at position M1, the imaging light g reflected by the object W toward the lens Lens in different directions converges on the image sensor; that is, the imaging light g reflected in different directions is imaged at the same position on the image sensor, and at this time the image from the image sensor is clear.
- M2 and M3 are the possible positions of the image sensor when the imaging device is not in focus.
- When the image sensor is at the M2 position or the M3 position, the imaging light g reflected by the object W toward the lens Lens in different directions is imaged at different positions.
- Referring to FIG. 1, when the image sensor is at the M2 position, the imaging light g reflected by the object W toward the lens Lens in different directions is imaged at position A and position B respectively; when the image sensor is at the M3 position, the imaging light g is imaged at position C and position D respectively. At this time, the image from the image sensor is not clear.
- To focus, the difference in the positions at which the imaging light entering the lens from different directions is imaged on the image sensor can be obtained, for example the difference between position A and position B, or the difference between position C and position D. From this difference and the geometric relationship between the lens and the image sensor in the camera, the defocus distance can be obtained.
- The so-called defocus distance refers to the distance between the current position of the image sensor and the position where the image sensor should be in the in-focus state; the imaging device can focus according to the obtained defocus distance.
- When in focus, the calculated PD (phase difference) value is 0; the larger the calculated value, the farther the current position is from the in-focus position, and the smaller the value, the closer.
- Phase detection pixel points may be provided in pairs among the pixel points included in the image sensor; for example, a phase detection pixel point pair (hereinafter referred to as a pixel point pair) A may be provided in the image sensor, in which one phase detection pixel point is shielded on the left (Left Shield) and the other is shielded on the right (Right Shield).
- For the phase detection pixel point shielded on the left, only the right part of the imaging beam directed at it can be imaged on its photosensitive part (that is, the part that is not shielded); for the phase detection pixel point shielded on the right, only the left part of the imaging beam can be imaged on its photosensitive part. In this way, the imaging beam is divided into a left part and a right part, and the phase difference can be obtained by comparing the images formed by the two parts.
- However, because the phase detection pixel points set in the image sensor are usually shielded on the left and right sides respectively, the PD value cannot be calculated for scenes with horizontal texture: when the shot scene is a horizontal line, left and right images are still obtained according to the PD characteristics, but the PD value cannot be calculated from them.
- In view of this, an imaging assembly is provided in the embodiments of the application, which can detect the phase difference value in the first direction and the phase difference value in the second direction; for a horizontal texture scene, the phase difference value in the second direction can be used to achieve focusing.
- the present application provides an imaging assembly.
- the imaging component includes an image sensor.
- The image sensor may be a complementary metal oxide semiconductor (CMOS) image sensor, a charge-coupled device (CCD) image sensor, a quantum thin-film sensor, or an organic sensor.
- FIG. 3 is a schematic diagram of a part of the image sensor in an embodiment.
- the image sensor 300 includes a plurality of pixel point groups Z arranged in an array, and each pixel point group Z includes a plurality of pixel points D arranged in an array, and each pixel point D corresponds to a photosensitive unit.
- the multiple pixels include M*N pixels, where both M and N are natural numbers greater than or equal to 2.
- Each pixel point D includes a plurality of sub-pixel points d arranged in an array; that is, each photosensitive unit can be composed of a plurality of photosensitive elements arranged in an array, a photosensitive element being an element that can convert light signals into electrical signals. Referring to FIG. 4, each pixel point group Z includes 4 pixel points D arranged in a 2*2 array, and each pixel point D may include 4 sub-pixel points d arranged in a 2*2 array.
- the four sub-pixel points d jointly cover a microlens W.
- each pixel point D includes 2*2 photodiodes, and the 2*2 photodiodes are arranged correspondingly to the 4 sub-pixel points d arranged in a 2*2 array.
- Each photodiode is used to receive optical signals and perform photoelectric conversion, thereby converting the optical signals into electrical signals for output.
- The 4 sub-pixel points d included in each pixel point D are set corresponding to the same color filter, so each pixel point D corresponds to one color channel, such as the red channel R, the green channel G, or the blue channel B.
- By combining the signals of sub-pixel point 1 and sub-pixel point 2 for output and the signals of sub-pixel point 3 and sub-pixel point 4 for output, two PD pixel pairs along the second direction (that is, the vertical direction) are constructed, and from their phase values the PD value (phase difference value) of each sub-pixel point in pixel point D along the second direction can be determined.
- Similarly, by combining the signals of sub-pixel point 1 and sub-pixel point 3 for output and the signals of sub-pixel point 2 and sub-pixel point 4 for output, two PD pixel pairs along the first direction (that is, the horizontal direction) are constructed, and from their phase values the PD value of each sub-pixel point in pixel point D along the first direction can be determined.
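- The following is a minimal sketch, not taken from the patent, of how the four sub-pixel signals of one 2*2 pixel point D can be combined into the two kinds of PD pixel pairs described above (sub-pixel 1 = top-left, 2 = top-right, 3 = bottom-left, 4 = bottom-right; all names are illustrative):

```python
# Sketch: combine the 2*2 sub-pixel signals of one pixel point D into PD pairs.
def pd_pairs(d1, d2, d3, d4):
    # Combining 1+3 and 2+4 yields a left/right pair: comparing the two gives
    # the phase difference along the first (horizontal) direction.
    left, right = d1 + d3, d2 + d4
    # Combining 1+2 and 3+4 yields a top/bottom pair: comparing the two gives
    # the phase difference along the second (vertical) direction.
    top, bottom = d1 + d2, d3 + d4
    return (left, right), (top, bottom)
```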
- Fig. 5 is a schematic structural diagram of an imaging device in an embodiment.
- the imaging device includes a lens 50, a filter 52 and an imaging component 54.
- the lens 50, the filter 52 and the imaging component 54 are sequentially located on the incident light path, that is, the lens 50 is disposed on the filter 52, and the filter 52 is disposed on the imaging component 54.
- the imaging component 54 includes the image sensor in FIG. 3.
- the image sensor includes a plurality of pixel point groups Z arranged in an array.
- Each pixel point group Z includes a plurality of pixel points D arranged in an array.
- Each pixel point D corresponds to a photosensitive unit, and each photosensitive unit can be composed of multiple photosensitive elements arranged in an array.
- In this embodiment, each pixel point D includes 4 sub-pixel points d arranged in a 2*2 array, and each sub-pixel point d corresponds to a photodiode 542; that is, the 2*2 photodiodes 542 are arranged corresponding to the 4 sub-pixel points d arranged in a 2*2 array.
- The 4 sub-pixel points d share one microlens.
- The filter 52 may be of three types, red, green, and blue, each transmitting only light of the corresponding red, green, or blue wavelengths.
- the four sub-pixel points d included in one pixel point D are arranged corresponding to the filters of the same color.
- the filter may also be white, which facilitates the passage of light in a larger spectrum (wavelength) range and increases the luminous flux passing through the white filter.
- the lens 50 is used to receive incident light and transmit the incident light to the filter 52. After the filter 52 performs filtering processing on the incident light, the filtered light is incident on the imaging component 54 on a pixel basis.
- the photosensitive unit in the image sensor included in the imaging component 54 converts the light incident from the filter 52 into a charge signal through the photoelectric effect, and generates a pixel signal consistent with the charge signal, and finally outputs an image after a series of processing.
- the pixels included in the image sensor and the pixels included in the image are two different concepts.
- the pixels included in the image refer to the smallest component unit of the image, which is generally represented by a sequence of numbers.
- the sequence of numbers can be referred to as the pixel value of a pixel.
- The embodiments of the present application involve both the concept of "pixels included in an image sensor" and the concept of "pixels included in an image"; to facilitate readers' understanding, a brief explanation is provided here.
- Fig. 6 is a schematic diagram of a filter set on a pixel point group in an embodiment.
- The pixel point group Z includes 4 pixel points D arranged in an array of two rows and two columns. The color channel of the pixel point in the first row and first column is green, that is, its filter is a green filter; the color channel of the pixel point in the first row and second column is red, that is, its filter is a red filter; the color channel of the pixel point in the second row and first column is blue, that is, its filter is a blue filter; and the color channel of the pixel point in the second row and second column is green, that is, its filter is a green filter.
- Fig. 7 is a flowchart of a focusing method in an embodiment.
- the focusing method in this embodiment will be described by taking an example of running on the imaging device in FIG. 5.
- the focusing method includes steps 702 to 706.
- Step 702 Obtain a phase difference value through an image sensor during shooting.
- the phase difference value includes a phase difference value in a first direction and a phase difference value in a second direction.
- the first direction and the second direction form a preset angle.
- When an image is taken by the imaging device of an electronic device, a phase difference value is acquired, the phase difference value including a phase difference value in the first direction and a phase difference value in the second direction.
- the first direction and the second direction may form a preset angle, and the preset angle may be any angle other than 0 degrees, 180 degrees, and 360 degrees.
- the phase difference value in the first direction refers to the phase difference value in the horizontal direction.
- the phase difference value in the second direction refers to the phase difference value in the vertical direction.
- Step 704 Determine a defocus distance value according to the phase difference value in the first direction and the phase difference value in the second direction.
- the corresponding relationship between the phase difference value and the defocus distance value can be obtained through calibration.
- defocus = PD * slope(DCC), where DCC (defocus conversion coefficient) is obtained by calibration, defocus is the defocus distance value, slope is the slope function, and PD is the phase difference value.
- The calibration process for the correspondence between the phase difference value and the defocus distance value includes: dividing the effective focus stroke of the camera module into 10 equal parts, that is, (near-focus DAC - far-focus DAC)/10, so as to cover the focus range of the motor; focusing at each focus DAC position (the DAC may range from 0 to 1023) and recording the phase difference at the current focus DAC position; and, after completing the motor focus stroke, taking the group of 10 focus DACs together with the recorded PD values, which gives 10 similar ratios K, and fitting the two-dimensional data composed of DAC and PD to obtain a straight line with slope K.
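- As a hedged illustration of this calibration, the sketch below fits a straight line to hypothetical (PD, focus DAC) pairs recorded at 10 focus positions; the fitted slope plays the role of the DCC, and defocus = PD * slope (all sample values are made up):

```python
import numpy as np

# Hypothetical calibration data: 10 focus DAC positions across the focus
# stroke and the phase difference (PD) recorded at each of them.
focus_dacs = np.linspace(100, 1000, 10)
pd_values = 0.02 * (focus_dacs - 550)        # made-up recorded PD values

# Fit DAC as a linear function of PD; the slope K is the defocus
# conversion coefficient (DCC).
dcc, _ = np.polyfit(pd_values, focus_dacs, 1)

def defocus_distance(pd):
    # defocus = PD * slope(DCC), in DAC units relative to the in-focus
    # position (where PD = 0).
    return pd * dcc
```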
- Step 706 Control the lens to move to focus according to the defocus distance value.
- When neither the phase difference value in the first direction nor the phase difference value in the second direction is 0, the confidence level of each can be obtained, the phase difference value with the higher confidence is selected as the target phase difference value, and the corresponding defocus distance value is then obtained from the mapping between phase difference values and defocus distance values according to the target phase difference value.
- The confidence level indicates the credibility of the phase difference calculation result.
- For example, to calculate the phase difference at a row coordinate x in the image, the brightness values of the 5 pixels x-2, x-1, x, x+1, x+2 in the left image are taken as a window, and the window is moved over the right image within a range of -10 to +10.
- The degree of similarity at each shift is measured, and the most similar pixel values are taken as matched pixels, from which the phase difference is obtained.
- For the upper and lower images, the brightness values of a column of pixels in the upper image are likewise compared with the brightness values of the same number of pixels in the lower image; the process of obtaining the credibility for the upper and lower images is similar to that for the left and right images and is not repeated here.
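- A minimal sketch of the left-and-right window matching described above (assuming x is far enough from the image border for the search range; all names are illustrative):

```python
import numpy as np

def phase_and_confidence(left_row, right_row, x, half=2, search=10):
    # Take the 5-pixel brightness window x-2..x+2 from the left image row.
    window = left_row[x - half : x + half + 1]
    # Slide the window over the right image row with shifts of -10..+10
    # and record the sum of absolute differences (SAD) at each shift.
    sads = np.array([
        np.abs(window - right_row[x + s - half : x + s + half + 1]).sum()
        for s in range(-search, search + 1)
    ])
    best = int(sads.argmin())
    shift = best - search                    # phase difference in pixels
    # A simple credibility measure: the margin between the best match and
    # the second-best match (a larger margin means a more reliable result).
    confidence = float(np.partition(sads, 1)[1] - sads[best])
    return shift, confidence
```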
- When the confidence of the phase difference value in the first direction is higher, the phase difference value in the first direction is selected, the corresponding defocus distance value is obtained from it, and the moving direction is determined to be horizontal.
- When the confidence of the phase difference value in the second direction is higher, the phase difference value in the second direction is selected, the corresponding defocus distance value is obtained from it, and the moving direction is determined to be vertical.
- Alternatively, the defocus distance value in the horizontal direction can be determined according to the phase difference value in the first direction, and the defocus distance value in the vertical direction according to the phase difference value in the second direction; the lens then moves first according to the horizontal defocus distance value and then according to the vertical one, or first according to the vertical defocus distance value and then according to the horizontal one.
- When the PD pixel pairs in the horizontal direction cannot yield the phase difference value in the first direction, the PD pixel pairs in the vertical direction are compared to calculate the phase difference value in the second direction; the defocus distance value is calculated from the phase difference value in the second direction, and the lens movement is controlled according to this vertical defocus distance value to achieve focusing.
- When the PD pixel pairs in the vertical direction cannot yield the phase difference value in the second direction, the PD pixel pairs in the horizontal direction are compared to calculate the phase difference value in the first direction; the defocus distance value is calculated from the phase difference value in the first direction, and the lens movement is controlled according to this horizontal defocus distance value to achieve focusing.
- In this way, the phase difference value in the first direction and the phase difference value in the second direction are obtained, the defocus distance value and the moving direction are determined from them, and the lens movement is controlled accordingly, realizing phase detection auto focus. Because phase difference values in both directions can be output, the phase difference value can be used effectively for focusing in scenes with horizontal texture or vertical texture, which improves the accuracy and stability of focusing.
- A frequency-domain algorithm or a spatial-domain algorithm can be used to obtain the phase difference value.
- The frequency-domain algorithm exploits the Fourier shift property: the collected target brightness map is converted from the spatial domain to the frequency domain using the Fourier transform, the phase correlation is calculated, and the location at which the correlation reaches its maximum (the peak) corresponds to the displacement; applying the inverse Fourier transform then yields the displacement in the spatial domain.
- The spatial-domain algorithm finds feature points, such as edge features, DoG (difference of Gaussians) features, or Harris corner points, and then uses these feature points to calculate the displacement.
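- A minimal sketch of the frequency-domain approach on 1-D brightness signals (the 2-D case is analogous):

```python
import numpy as np

def phase_correlation_shift(a, b):
    # Transform both signals to the frequency domain.
    A, B = np.fft.fft(a), np.fft.fft(b)
    # Normalized cross-power spectrum: keep only the phase information.
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12
    # The inverse transform peaks at the displacement between a and b.
    corr = np.fft.ifft(cross).real
    peak, n = int(corr.argmax()), len(a)
    return peak if peak <= n // 2 else peak - n   # signed displacement

sig = np.sin(np.linspace(0, 8 * np.pi, 128))
print(phase_correlation_shift(np.roll(sig, 3), sig))   # prints 3
```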
- Fig. 8 is a flow chart of obtaining the phase difference in an embodiment. As shown in Figure 8, the acquisition of the phase difference includes:
- Step 802 Obtain a target brightness map according to the brightness value of the pixel points included in each pixel point group.
- the brightness value of the pixel of the image sensor can be characterized by the brightness value of the sub-pixel included in the pixel.
- the imaging device may obtain the target brightness map according to the brightness values of the sub-pixel points in the pixel points included in each pixel point group.
- the brightness value of a sub-pixel point refers to the brightness value of the light signal received by the photosensitive element corresponding to the sub-pixel point.
- Each sub-pixel included in the image sensor corresponds to a photosensitive element that can convert light signals into electrical signals; therefore, the intensity of the light signal received by a sub-pixel can be obtained from the electrical signal it outputs, and the brightness value of the sub-pixel can be obtained from that intensity.
- The target brightness map in the embodiments of the present application reflects the brightness values of the sub-pixels in the image sensor; it may include multiple pixels, where the pixel value of each pixel in the target brightness map is obtained from the brightness value of a sub-pixel in the image sensor.
- Step 804 Perform segmentation processing on the target brightness map to obtain a first segmented brightness map and a second segmented brightness map, and compare the first segmented brightness map and the second segmented brightness map to each other. The position difference of the matched pixels determines the phase difference value of the mutually matched pixels.
- In one embodiment, the imaging device may segment the target brightness map along the column direction (the y-axis direction in the image coordinate system); in this process, each dividing line of the segmentation is perpendicular to the column direction.
- In another embodiment, the imaging device may segment the target brightness map along the row direction (the x-axis direction in the image coordinate system); in this process, each dividing line of the segmentation is perpendicular to the row direction.
- the first segmented brightness map and the second segmented brightness map obtained after the target brightness map is segmented along the column direction can be referred to as the upper image and the lower image, respectively.
- the first segmented brightness map and the second segmented brightness map obtained after the target brightness map is segmented along the row direction can be called the left image and the right image, respectively.
- Pixels that match each other are pixels whose surrounding pixel matrices (each composed of the pixel itself and the pixels around it) are similar to each other.
- For example, the pixel a and its surrounding pixels in the first segmented brightness map form a pixel matrix with 3 rows and 3 columns, and the pixel b and its surrounding pixels in the second segmented brightness map also form a pixel matrix with 3 rows and 3 columns; if the two matrices are similar, the pixel a and the pixel b can be considered to match each other.
- One way to judge similarity is to take the difference of each pair of corresponding pixel values in the two matrices, add the absolute values of the differences, and compare the sum with a preset threshold: if the sum is less than the threshold, the matrices are considered similar; otherwise they are not. For example, taking the difference of 1 and 2, the difference of 15 and 15, the difference of 70 and 70, and so on, and adding the absolute values, the result is 3; if 3 is less than the preset threshold, the two 3*3 matrices are considered similar.
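- A small sketch of this similarity test with hypothetical 3*3 matrices (the example values below reproduce the differences mentioned in the text and are otherwise made up):

```python
import numpy as np

def patches_match(p1, p2, threshold=5):
    # Sum of absolute differences of corresponding pixel values; the two
    # patches are considered similar if the sum is below the threshold.
    return int(np.abs(p1 - p2).sum()) < threshold

p_a = np.array([[1, 15, 70], [35, 60, 170], [100, 220, 30]])   # around pixel a
p_b = np.array([[2, 15, 70], [36, 60, 169], [100, 220, 30]])   # around pixel b
print(patches_match(p_a, p_b))   # True: the total absolute difference is 3
```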
- Another way to judge whether the pixel matrices are similar is to extract edge features, for example with a Sobel convolution kernel or a Laplacian operator, and judge similarity by the edge features.
- The position difference of pixels that match each other refers to the difference between the position of the pixel in the first segmented brightness map and the position of the pixel in the second segmented brightness map. In the above example, the position difference of the mutually matched pixel a and pixel b is the difference between the position of pixel a in the first segmented brightness map and the position of pixel b in the second segmented brightness map.
- Pixels that match each other correspond to images formed on the image sensor by imaging light entering the lens from different directions. For example, if pixel a in the first segmented brightness map and pixel b in the second segmented brightness map match each other, pixel a may correspond to the image formed at position A in FIG. 1 and pixel b to the image formed at position B.
- Since the matched pixels correspond to images formed by imaging light from different directions, the phase difference of the matched pixels can be determined according to their position difference.
- Step 806 Determine the phase difference value in the first direction or the phase difference value in the second direction according to the phase difference values of the mutually matched pixels.
- When the target brightness map is segmented along the row direction, so that the first segmented brightness map includes even-numbered columns and the second segmented brightness map includes odd-numbered columns, the phase difference value in the first direction can be determined from the phase difference of the mutually matched pixel a and pixel b.
- When the target brightness map is segmented along the column direction, so that the first segmented brightness map includes even-numbered rows and the second segmented brightness map includes odd-numbered rows, and pixel a in the first segmented brightness map matches pixel b in the second segmented brightness map, the phase difference value in the second direction can be determined from the phase difference of the matched pixel a and pixel b.
- In this embodiment, the target brightness map is obtained from the brightness values of the pixel points in the pixel point groups, the target brightness map is segmented, and the phase difference values of the matched pixels are determined from the segmented maps.
- Because each pixel point is composed of multiple sub-pixel points, abundant phase difference values can be determined quickly, which improves the accuracy of the phase difference value and the accuracy and stability of focusing.
- In one embodiment, each pixel point includes a plurality of sub-pixel points arranged in an array, and obtaining the target brightness map according to the brightness values of the pixel points included in each pixel point group includes: for each pixel point group, obtaining the sub-brightness map corresponding to the pixel point group according to the brightness values of the sub-pixel points at the same position in each pixel point of the group; and generating the target brightness map according to the sub-brightness maps corresponding to the pixel point groups.
- the sub-pixel points at the same position of each pixel point refer to the sub-pixel points that are arranged in the same position in each pixel point.
- Fig. 9 is a schematic diagram of a pixel point group in an embodiment.
- the pixel point group includes 4 pixels arranged in an array arrangement of two rows and two columns.
- The sub-pixels d11, d21, d31, and d41 are arranged in the same position within their respective pixels, all in the first row and first column; the sub-pixels d12, d22, d32, and d42 are all in the first row and second column; the sub-pixels d13, d23, d33, and d43 are all in the second row and first column; and the sub-pixels d14, d24, d34, and d44 are all in the second row and second column.
- obtaining the sub-luminance map corresponding to the pixel point group according to the brightness value of the sub-pixel point at the same position of each pixel point in the pixel point group may include steps A1 to A3.
- Step A1 the imaging device determines the sub-pixel point at the same position from each pixel point to obtain a plurality of sub-pixel point sets.
- the positions of the sub-pixels included in each sub-pixel set are the same in the pixel points.
- The imaging device determines the sub-pixels at the same position from the pixels D1, D2, D3, and D4 respectively, obtaining 4 sub-pixel sets J1, J2, J3, and J4: the sub-pixel set J1 includes sub-pixels d11, d21, d31, and d41, all located in the first row and first column of their respective pixels; J2 includes d12, d22, d32, and d42, all in the first row and second column; J3 includes d13, d23, d33, and d43, all in the second row and first column; and J4 includes d14, d24, d34, and d44, all in the second row and second column.
- Step A2 For each sub-pixel point set, the imaging device obtains the brightness value corresponding to the sub-pixel point set according to the brightness value of each sub-pixel point in the sub-pixel point set.
- the imaging device may determine a color coefficient corresponding to each sub-pixel point in the sub-pixel point set, where the color coefficient is determined according to the color channel corresponding to the sub-pixel point.
- For example, the sub-pixel d11 belongs to the pixel D1. If the filter included in the pixel D1 is a green filter, that is, the color channel of the pixel D1 is green, then the color channel of the sub-pixel d11 it includes is also green, and the imaging device can determine the color coefficient corresponding to the sub-pixel d11 according to this color channel (green).
- After determining the color coefficient corresponding to each sub-pixel in the sub-pixel set, the imaging device can multiply the brightness value of each sub-pixel by its color coefficient to obtain the weighted brightness value of each sub-pixel in the set.
- the imaging device may multiply the brightness value of the sub-pixel point d11 by the color coefficient corresponding to the sub-pixel point d11 to obtain the weighted brightness value of the sub-pixel point d11.
- the imaging device may add the weighted brightness value of each sub-pixel point in the sub-pixel point set to obtain the brightness value corresponding to the sub-pixel point set.
- the brightness value corresponding to the sub-pixel point set J1 can be calculated based on the following first formula.
- Y_TL = Y_21*C_R + (Y_11 + Y_41)*C_G/2 + Y_31*C_B,
- where Y_TL is the brightness value corresponding to the sub-pixel set J1; Y_11, Y_21, Y_31, and Y_41 are the brightness values of the sub-pixels d11, d21, d31, and d41; C_R is the color coefficient corresponding to sub-pixel d21, C_G/2 is the color coefficient corresponding to sub-pixels d11 and d41, and C_B is the color coefficient corresponding to sub-pixel d31; and Y_21*C_R, Y_11*C_G/2, Y_41*C_G/2, and Y_31*C_B are the weighted brightness values of the sub-pixels d21, d11, d41, and d31 respectively.
- the brightness value corresponding to the sub-pixel point set J2 can be calculated based on the following second formula.
- Y_TR = Y_22*C_R + (Y_12 + Y_42)*C_G/2 + Y_32*C_B,
- where Y_TR is the brightness value corresponding to the sub-pixel set J2; Y_12, Y_22, Y_32, and Y_42 are the brightness values of the sub-pixels d12, d22, d32, and d42; C_R is the color coefficient corresponding to sub-pixel d22, C_G/2 is the color coefficient corresponding to sub-pixels d12 and d42, and C_B is the color coefficient corresponding to sub-pixel d32; and Y_22*C_R, Y_12*C_G/2, Y_42*C_G/2, and Y_32*C_B are the weighted brightness values of the sub-pixels d22, d12, d42, and d32 respectively.
- the brightness value corresponding to the sub-pixel point set J3 can be calculated based on the following third formula.
- Y_BL = Y_23*C_R + (Y_13 + Y_43)*C_G/2 + Y_33*C_B,
- where Y_BL is the brightness value corresponding to the sub-pixel set J3; Y_13, Y_23, Y_33, and Y_43 are the brightness values of the sub-pixels d13, d23, d33, and d43; C_R is the color coefficient corresponding to sub-pixel d23, C_G/2 is the color coefficient corresponding to sub-pixels d13 and d43, and C_B is the color coefficient corresponding to sub-pixel d33; and Y_23*C_R, Y_13*C_G/2, Y_43*C_G/2, and Y_33*C_B are the weighted brightness values of the sub-pixels d23, d13, d43, and d33 respectively.
- the brightness value corresponding to the sub-pixel point set J4 can be calculated based on the following fourth formula.
- Y_BR = Y_24*C_R + (Y_14 + Y_44)*C_G/2 + Y_34*C_B,
- where Y_BR is the brightness value corresponding to the sub-pixel set J4; Y_14, Y_24, Y_34, and Y_44 are the brightness values of the sub-pixels d14, d24, d34, and d44; C_R is the color coefficient corresponding to sub-pixel d24, C_G/2 is the color coefficient corresponding to sub-pixels d14 and d44, and C_B is the color coefficient corresponding to sub-pixel d34; and Y_24*C_R, Y_14*C_G/2, Y_44*C_G/2, and Y_34*C_B are the weighted brightness values of the sub-pixels d24, d14, d44, and d34 respectively.
- Step A3 the imaging device generates a sub-brightness map according to the brightness value corresponding to each sub-pixel set.
- the sub-luminance map includes a plurality of pixels, each pixel in the sub-luminance map corresponds to a sub-pixel set, and the pixel value of each pixel is equal to the brightness value corresponding to the corresponding sub-pixel set.
- Fig. 10 is a schematic diagram of a sub-luminance map in an embodiment.
- The sub-brightness map includes 4 pixels: the pixel in the first row and first column corresponds to the sub-pixel set J1 and its pixel value is Y_TL; the pixel in the first row and second column corresponds to J2 and its pixel value is Y_TR; the pixel in the second row and first column corresponds to J3 and its pixel value is Y_BL; and the pixel in the second row and second column corresponds to J4 and its pixel value is Y_BR.
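- A sketch of the four formulas and the resulting sub-brightness map for one pixel point group, assuming the color layout of FIG. 6 (D1 = Gr, D2 = R, D3 = B, D4 = Gb) and made-up color coefficients (the patent does not give numeric values):

```python
import numpy as np

C_R, C_G, C_B = 0.30, 0.59, 0.11   # assumed color coefficients

def sub_brightness_map(subpix):
    # subpix[p] is the 2*2 array of sub-pixel brightness values of pixel
    # D(p+1); the entries at (r, c) of all four pixels form one set Jk.
    out = np.empty((2, 2))
    for r in range(2):
        for c in range(2):
            y1, y2, y3, y4 = (subpix[p][r][c] for p in range(4))
            # Weighted sum over the set: the sub-pixel from D2 is weighted
            # by C_R, those from D1 and D4 by C_G/2, and that from D3 by C_B.
            out[r, c] = y2 * C_R + (y1 + y4) * C_G / 2 + y3 * C_B
    return out   # [[Y_TL, Y_TR], [Y_BL, Y_BR]], as in FIG. 10
```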
- Fig. 11 is a flowchart of obtaining a target brightness map in an embodiment. As shown in FIG. 11, the method of obtaining the target brightness map may include the following steps:
- Step 1102 Determine target pixels from each pixel group to obtain multiple target pixels.
- the pixel point group may include a plurality of pixel points arranged in an array, and the imaging device may determine a target pixel point from the plurality of pixel points included in each pixel point group, so as to obtain a plurality of target pixel points.
- In a possible implementation, the imaging device may determine, from each pixel point group, a pixel whose color channel is green (that is, a pixel provided with a green filter) and determine that pixel as the target pixel.
- Because pixels with a green color channel have better photosensitive performance, determining them as the target pixels gives a higher-quality target brightness map in the subsequent steps.
- Step 1104 Generate a sub-brightness map corresponding to each pixel point group according to the brightness value of the sub-pixel points included in each target pixel point.
- the sub-luminance map corresponding to each pixel point group includes a plurality of pixels, and each pixel in the sub-luminance map corresponding to each pixel point group corresponds to a sub-pixel point included in the target pixel point in the pixel point group.
- the pixel value of each pixel in the sub-luminance map corresponding to each pixel point group is the brightness value of the corresponding sub-pixel point.
- FIG. 12 is a schematic diagram of generating the sub-brightness map L corresponding to the pixel point group Z1 from the brightness values of the sub-pixels included in the target pixel DM of the group in an embodiment.
- The sub-brightness map L includes 4 pixels, each corresponding to one sub-pixel of the target pixel DM, and the pixel value of each pixel is the brightness value of the corresponding sub-pixel: the pixel in the first row and first column of L corresponds to the sub-pixel in the first row and first column of DM and its pixel value is the brightness value Gr_TL; the pixel in the first row and second column corresponds to the sub-pixel in the first row and second column and its value is Gr_TR; the pixel in the second row and first column corresponds to the sub-pixel in the second row and first column and its value is Gr_BL; and the pixel in the second row and second column corresponds to the sub-pixel in the second row and second column and its value is Gr_BR.
- Step 1106 Generate a target brightness map according to the sub-brightness map corresponding to each pixel point group.
- the imaging device can splice the sub-luminance maps corresponding to each pixel point group according to the array arrangement of each pixel point group in the image sensor to obtain the target luminance map.
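- A sketch of steps 1102 to 1106, assuming the sensor output is a 2-D array in which each pixel point group occupies a 4*4 block (2*2 pixels of 2*2 sub-pixels) and the top-left pixel of each group is the green target pixel, as in FIG. 12 (this layout is an assumption for illustration):

```python
import numpy as np

def target_brightness_map(sensor, group=4, pixel=2):
    h, w = sensor.shape
    rows, cols = h // group, w // group
    out = np.empty((rows * pixel, cols * pixel))
    for i in range(rows):
        for j in range(cols):
            # Sub-brightness map of this group: the 2*2 sub-pixel brightness
            # values of its green target pixel (top-left pixel of the group).
            block = sensor[i * group : i * group + pixel,
                           j * group : j * group + pixel]
            # Splice the sub-brightness maps in the groups' array order.
            out[i * pixel : (i + 1) * pixel,
                j * pixel : (j + 1) * pixel] = block
    return out
```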
- Fig. 13 is a flowchart of obtaining a target brightness map in another embodiment. As shown in FIG. 13, the method of obtaining the target brightness map may include the following steps:
- Step 1302 Determine the pixel point at the same position from each pixel point group to obtain a plurality of pixel point sets.
- the positions of the pixels included in each pixel point set in the pixel point group are all the same.
- The imaging device determines the pixel points at the same position from the pixel point group Z1, the pixel point group Z2, the pixel point group Z3, and the pixel point group Z4, obtaining four pixel point sets P1, P2, P3, and P4: the pixel point set P1 includes pixels D11, D21, D31, and D41, all located in the first row and first column of their respective pixel point groups; P2 includes pixels D12, D22, D32, and D42, all in the first row and second column; P3 includes pixels D13, D23, D33, and D43, all in the second row and first column; and P4 includes pixels D14, D24, D34, and D44, all in the second row and second column.
- Step 1304 The imaging device generates multiple target brightness maps one-to-one corresponding to the multiple pixel point sets according to the brightness values of the pixels in the multiple pixel point sets.
- The brightness value of a pixel of the image sensor can be characterized by the brightness values of the sub-pixels it includes; therefore, for each pixel point set, the imaging device can generate the target brightness map corresponding to that set from the brightness values of the sub-pixels included in each pixel of the set.
- the target brightness map corresponding to a certain pixel point set includes a plurality of pixels, each pixel in the target brightness map corresponds to a sub-pixel point of the pixel points included in the pixel point set, and the target brightness map The pixel value of each pixel is the brightness value of the corresponding sub-pixel.
- In the method of obtaining the target brightness map in FIG. 11 (the second method), the imaging device determines one pixel point (the target pixel point) from each pixel point group and generates the target brightness map according to the determined pixel points; in other words, it generates a single target brightness map from one pixel in each pixel point group.
- In the method of obtaining the target brightness map in FIG. 13, the imaging device generates one target brightness map according to one pixel in each pixel point group, generates another target brightness map according to another pixel in each pixel point group, and so on.
- In this way, the number of target brightness maps acquired by the imaging device is the same as the number of pixel points included in a pixel point group.
- After obtaining multiple target brightness maps, for each target brightness map the imaging device performs segmentation processing on it and obtains the first segmented brightness map and the second segmented brightness map from the segmentation results.
- For each target brightness map, the imaging device can obtain an intermediate phase difference map from the phase differences of the matched pixels in the corresponding first and second segmented brightness maps, and the target phase difference map can then be obtained from the intermediate phase difference maps corresponding to all the target brightness maps.
- In this way, the accuracy of the obtained target phase difference map is relatively high; for example, when the pixel point group includes 4 pixel points, the accuracy of the target phase difference map obtained in this way is 4 times the accuracy of the target phase difference map obtained by the second method of obtaining the target brightness map.
- the embodiment of the present application will describe the technical process of obtaining the target phase difference map according to the intermediate phase difference map corresponding to each target brightness map, and the technical process may include step B1 to step B3.
- Step B1 The imaging device determines pixels at the same position from each intermediate phase difference map to obtain a plurality of phase difference pixel sets.
- the positions of the pixels included in each phase difference pixel set in the intermediate phase difference map are all the same.
- The imaging device determines the pixels at the same position from the intermediate phase difference map 1, the intermediate phase difference map 2, the intermediate phase difference map 3, and the intermediate phase difference map 4, obtaining 4 phase difference pixel sets Y1, Y2, Y3, and Y4: the set Y1 includes the pixel PD_Gr_1 in map 1, the pixel PD_R_1 in map 2, the pixel PD_B_1 in map 3, and the pixel PD_Gb_1 in map 4; the set Y2 includes PD_Gr_2, PD_R_2, PD_B_2, and PD_Gb_2; the set Y3 includes PD_Gr_3, PD_R_3, PD_B_3, and PD_Gb_3; and the set Y4 includes PD_Gr_4, PD_R_4, PD_B_4, and PD_Gb_4.
- Step B2 For each phase difference pixel set, the imaging device stitches the pixels in the phase difference pixel set to obtain a sub-phase difference map corresponding to the phase difference pixel set.
- the sub-phase difference map includes a plurality of pixels, each pixel corresponds to a pixel in the phase difference pixel set, and the pixel value of each pixel is equal to the pixel value of the corresponding pixel.
- Step B3 the imaging device stitches the obtained multiple sub-phase difference maps to obtain the target phase difference map.
- FIG. 16 is a schematic diagram of a target phase difference map; the target phase difference map includes sub-phase difference map 1, sub-phase difference map 2, sub-phase difference map 3, and sub-phase difference map 4, which correspond to the phase difference pixel sets Y1, Y2, Y3, and Y4 respectively.
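- A sketch of steps B1 to B3, assuming four intermediate phase difference maps of equal shape and, for illustration, a 2*2 arrangement of each phase difference pixel set within its sub-phase difference map (the exact arrangement is not specified by the text):

```python
import numpy as np

def stitch_target_pd_map(pd_maps):
    # pd_maps: the four intermediate phase difference maps, in the order
    # Gr, R, B, Gb (order assumed for illustration).
    h, w = pd_maps[0].shape
    out = np.empty((2 * h, 2 * w))
    offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]
    for m, (dr, dc) in zip(pd_maps, offsets):
        # The pixels at position (i, j) of the four maps form one phase
        # difference pixel set; its 2*2 sub-phase difference map occupies
        # rows 2i..2i+1 and columns 2j..2j+1 of the target map.
        out[dr::2, dc::2] = m
    return out
```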
- FIG. 17 is a flowchart of a method of segmenting the target brightness map to obtain the first segmented brightness map and the second segmented brightness map in an embodiment; it can be applied to the imaging device shown in FIG. 3. As shown in FIG. 17, the method may include the following steps:
- Step 1702: segment the target brightness map to obtain multiple brightness map areas.
- Each brightness map area includes one row of pixels of the target brightness map, or each brightness map area includes one column of pixels of the target brightness map.
- Optionally, the imaging device may segment the target brightness map column by column along the row direction to obtain the pixel columns of the target brightness map (that is, the brightness map areas described above).
- Optionally, the imaging device may segment the target brightness map row by row along the column direction to obtain the pixel rows of the target brightness map (that is, the brightness map areas described above).
- Step 1704: obtain multiple first brightness map areas and multiple second brightness map areas from the brightness map areas.
- A first brightness map area includes the pixels of an even-numbered row of the target brightness map, or the pixels of an even-numbered column of the target brightness map.
- A second brightness map area includes the pixels of an odd-numbered row of the target brightness map, or the pixels of an odd-numbered column of the target brightness map.
- In other words, when the target brightness map is segmented column by column, the imaging device may determine the even-numbered columns as first brightness map areas and the odd-numbered columns as second brightness map areas.
- When the target brightness map is segmented row by row, the imaging device may determine the even-numbered rows as first brightness map areas and the odd-numbered rows as second brightness map areas.
- Step 1706: form the first segmented brightness map from the first brightness map areas, and form the second segmented brightness map from the second brightness map areas.
- Referring to FIG. 18, suppose the target brightness map includes 6 rows and 6 columns of pixels. When the target brightness map is segmented column by column, the imaging device may determine the 1st, 3rd and 5th columns of the target brightness map as second brightness map areas and the 2nd, 4th and 6th columns as first brightness map areas. The imaging device may then splice the first brightness map areas to obtain a first segmented brightness map T1, which includes the 2nd, 4th and 6th columns of the target brightness map, and splice the second brightness map areas to obtain a second segmented brightness map T2, which includes the 1st, 3rd and 5th columns. A code sketch of this splitting follows this paragraph.
- Referring to FIG. 19, with the same 6-row, 6-column target brightness map segmented row by row, the imaging device may determine the 1st, 3rd and 5th rows as second brightness map areas and the 2nd, 4th and 6th rows as first brightness map areas. The imaging device may then splice the first brightness map areas to obtain a first segmented brightness map T3, which includes the 2nd, 4th and 6th rows of the target brightness map, and splice the second brightness map areas to obtain a second segmented brightness map T4, which includes the 1st, 3rd and 5th rows.
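- The even/odd splitting of steps 1702 to 1706 can be sketched in a few lines of Python; rows and columns are counted from 1 in the text, so the "even-numbered" rows and columns are indices 1, 3, 5, ... in zero-based slicing.

```python
import numpy as np

def segment_brightness_map(target, by_columns):
    """Split a target brightness map into the first and second segmented
    brightness maps. by_columns=True splits column by column (even
    columns form the first map); by_columns=False splits row by row."""
    if by_columns:
        first = target[:, 1::2]   # 2nd, 4th, 6th ... columns
        second = target[:, 0::2]  # 1st, 3rd, 5th ... columns
    else:
        first = target[1::2, :]   # 2nd, 4th, 6th ... rows
        second = target[0::2, :]  # 1st, 3rd, 5th ... rows
    return first, second

# 6x6 example of FIG. 18 and FIG. 19
t = np.arange(36).reshape(6, 6)
t1, t2 = segment_brightness_map(t, by_columns=True)   # left/right style
t3, t4 = segment_brightness_map(t, by_columns=False)  # upper/lower style
```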
- Referring to FIG. 20, a way of determining the phase difference of mutually matched pixels from the position difference of the mutually matched pixels in the first and second segmented brightness maps is provided; it can be applied to the imaging device shown in FIG. 3 and may include the following steps:
- Step 2002: when a brightness map area includes one row of pixels of the target brightness map, determine a first adjacent pixel set in each row of pixels included in the first segmented brightness map.
- The pixels included in a first adjacent pixel set correspond to the same pixel point group of the image sensor.
- Referring to the sub-brightness map shown in FIG. 10: when a brightness map area includes one row of pixels of the target brightness map, that is, when the imaging device segments the target brightness map row by row along the column direction, the two pixels in the first row of that sub-brightness map lie in the same row of the target brightness map; after segmentation they therefore lie in the same brightness map area and in the same segmented brightness map. Likewise, the two pixels in the second row of the sub-brightness map lie in the same brightness map area and in the other segmented brightness map. Assuming the first row of the sub-brightness map lies in an even-numbered pixel row of the target brightness map, the two pixels of its first row lie in the first segmented brightness map and the two pixels of its second row lie in the second segmented brightness map.
- The imaging device may determine the two pixels in the first row of the sub-brightness map as a first adjacent pixel set, because these two pixels correspond to the same pixel point group of the image sensor (the pixel point group shown in FIG. 8).
- Step 2004: for each first adjacent pixel set, the imaging device searches the second segmented brightness map for the first matching pixel set corresponding to that first adjacent pixel set.
- For each first adjacent pixel set, the imaging device may obtain a number of pixels around the first adjacent pixel set in the first segmented brightness map, and form a search pixel matrix from the first adjacent pixel set and the surrounding pixels; for example, the search pixel matrix may include 9 pixels in 3 rows and 3 columns. The imaging device may then search the second segmented brightness map for a pixel matrix similar to the search pixel matrix. How to judge whether pixel matrices are similar has been described above and is not repeated here.
- After a similar pixel matrix is found in the second segmented brightness map, the imaging device may extract the first matching pixel set from it.
- The pixels in the first matching pixel set found by the search and the pixels in the first adjacent pixel set correspond, respectively, to the different images formed in the image sensor by imaging light entering the lens from different directions.
- Step 2006: determine, from the position difference between each first adjacent pixel set and each first matching pixel set, the phase difference of the mutually corresponding first adjacent pixel set and first matching pixel set, obtaining the phase difference value in the second direction.
- The position difference between a first adjacent pixel set and a first matching pixel set refers to the difference between the position of the first adjacent pixel set in the first segmented brightness map and the position of the first matching pixel set in the second segmented brightness map.
- In this case the first segmented brightness map and the second segmented brightness map may be called the upper image and the lower image, respectively; the phase difference obtained from the upper and lower images reflects the difference of the imaging positions of the object in the vertical direction.
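- A hedged sketch of steps 2002 to 2006 follows: the 3x3 search pixel matrix and the sum-of-absolute-differences similarity are taken from the text, while the +/-10 search range along the row axis is an assumption carried over from the confidence discussion elsewhere in the description. The position (i, j) must be an interior position so that the 3x3 window fits.

```python
import numpy as np

def match_and_pd(first_map, second_map, i, j, search_radius=10):
    """Search the second segmented brightness map for the 3x3 matrix most
    similar to the search pixel matrix centred at (i, j) of the first
    segmented map; the row offset of the best match is returned as the
    phase difference in the second (vertical) direction."""
    ref = first_map[i - 1:i + 2, j - 1:j + 2].astype(np.int64)
    best_d, best_cost = 0, None
    for d in range(-search_radius, search_radius + 1):
        r = i + d
        if r - 1 < 0 or r + 2 > second_map.shape[0]:
            continue  # candidate window would fall outside the map
        cand = second_map[r - 1:r + 2, j - 1:j + 2].astype(np.int64)
        cost = int(np.abs(ref - cand).sum())  # sum of absolute differences
        if best_cost is None or cost < best_cost:
            best_d, best_cost = d, cost
    return best_d, best_cost
```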
- Referring to FIG. 21, another way of determining the phase difference of mutually matched pixels from the position difference of the mutually matched pixels in the first and second segmented brightness maps is shown; it can be applied to the imaging device shown in FIG. 3 and may include the following steps:
- Step 2102: when a brightness map area includes one column of pixels of the target brightness map, determine a second adjacent pixel set in each column of pixels included in the first segmented brightness map, where the pixels included in the second adjacent pixel set correspond to the same pixel point group.
- Step 2104: for each second adjacent pixel set, search the second segmented brightness map for the second matching pixel set corresponding to that second adjacent pixel set.
- Step 2106: determine, from the position difference between each second adjacent pixel set and each second matching pixel set, the phase difference of the mutually corresponding second adjacent pixel set and second matching pixel set, obtaining the phase difference value in the first direction.
- The technical process of steps 2102 to 2106 is similar to that of steps 2002 to 2006 and is not repeated here.
- When a brightness map area includes one column of pixels of the target brightness map, the resulting first and second segmented brightness maps may be called the left image and the right image, respectively, and the phase difference obtained from the left and right images reflects the difference of the imaging positions of the object in the horizontal direction.
- Since the phase difference obtained when a brightness map area includes one column of pixels reflects the imaging position difference of the object in the horizontal direction, and the phase difference obtained when a brightness map area includes one row of pixels reflects the imaging position difference in the vertical direction, the phase difference obtained according to the embodiments of the present application can reflect the imaging position difference of the object both vertically and horizontally, so its accuracy is higher.
- In an embodiment, the focusing method described above may further include: generating a depth value from the defocus distance value. The image distance in the in-focus state can be calculated from the defocus distance value, the object distance can be obtained from the image distance and the focal length, and the object distance is the depth value.
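- The depth step can be illustrated with the thin-lens relation 1/f = 1/u + 1/v. The patent only states that the in-focus image distance follows from the defocus distance and that the image distance and focal length give the object distance, so the sign convention and the thin-lens model in this sketch are assumptions.

```python
def depth_from_defocus(defocus, focal_length, current_image_distance):
    """In-focus image distance v = current image distance + defocus
    (assumed sign convention); the thin-lens equation then gives the
    object distance u, which is the depth value. All quantities must
    share one length unit, and v must exceed the focal length."""
    v = current_image_distance + defocus
    u = 1.0 / (1.0 / focal_length - 1.0 / v)  # object distance = depth
    return u
```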
- FIG. 22 is a structural block diagram of a focusing apparatus according to an embodiment. As shown in FIG. 22, the focusing apparatus is applied to an electronic device that includes an image sensor. The image sensor includes a plurality of pixel point groups arranged in an array, each pixel point group includes M*N pixel points arranged in an array, and each pixel point corresponds to one photosensitive unit, where M and N are both natural numbers greater than or equal to 2. The apparatus includes a phase difference acquisition module 2210, a processing module 2212 and a control module 2214.
- The phase difference acquisition module 2210 is configured to acquire a phase difference value through the image sensor, the phase difference value including a phase difference value in a first direction and a phase difference value in a second direction, the first direction and the second direction forming a preset angle.
- The processing module 2212 is configured to determine a defocus distance value according to the phase difference value in the first direction and the phase difference value in the second direction.
- The control module 2214 is configured to control the lens to move for focusing according to the defocus distance value.
- The focusing apparatus described above acquires the phase difference values in the first and second directions, determines the defocus distance value and the movement direction from them, and controls the lens movement according to the defocus distance value and movement direction, realizing phase detection auto focus. Because it can output phase difference values in both the first and the second direction, the phase difference value can be used effectively for focusing in scenes with horizontal texture as well as scenes with vertical texture, which improves the accuracy and stability of focusing.
- In an embodiment, the processing module 2212 is further configured to obtain the confidence of the phase difference value in the first direction and the confidence of the phase difference value in the second direction; select, from the two, the phase difference value with the higher confidence as the target phase difference value; and determine the corresponding defocus distance value from the correspondence between phase difference values and defocus distance values according to the target phase difference value.
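- A small sketch of the selection logic of the processing module 2212; the function name is illustrative, and resolving ties toward the first direction is an assumption (the method text moves along both directions when the confidences are equal).

```python
def select_defocus(pd_first, conf_first, pd_second, conf_second, dcc_slope):
    """Pick the phase difference value with the higher confidence as the
    target phase difference value and map it to a defocus distance via
    the calibrated relationship defocus = PD * slope(DCC)."""
    if conf_first >= conf_second:
        return pd_first * dcc_slope, "first direction (horizontal)"
    return pd_second * dcc_slope, "second direction (vertical)"
```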
- In an embodiment, the phase difference acquisition module 2210 includes a brightness determining unit and a phase difference determining unit.
- The brightness determining unit is configured to obtain a target brightness map according to the brightness values of the pixel points included in each pixel point group.
- The phase difference determining unit is configured to segment the target brightness map to obtain a first segmented brightness map and a second segmented brightness map, determine the phase difference values of mutually matched pixels according to the position differences of the mutually matched pixels in the first and second segmented brightness maps, and determine the phase difference value in the first direction or the phase difference value in the second direction according to the phase difference values of the mutually matched pixels.
- In an embodiment, the brightness determining unit is further configured to, for each pixel point group, obtain the sub-brightness map corresponding to the group according to the brightness values of the sub-pixel points at the same position in each pixel point of the group, and generate the target brightness map according to the sub-brightness maps corresponding to the pixel point groups.
- In an embodiment, the brightness determining unit is further configured to determine the sub-pixel points at the same position in each pixel point to obtain multiple sub-pixel point sets, where the sub-pixel points included in each set occupy the same position in their pixel points; for each sub-pixel point set, obtain the brightness value corresponding to the set according to the brightness values of its sub-pixel points; and generate the sub-brightness map according to the brightness values corresponding to the sub-pixel point sets.
- In an embodiment, the brightness determining unit is further configured to determine the color coefficient corresponding to each sub-pixel point in a sub-pixel point set, the color coefficient being determined according to the color channel of the sub-pixel point; multiply the color coefficient of each sub-pixel point in the set by its brightness value to obtain the weighted brightness of each sub-pixel point; and add the weighted brightnesses of the sub-pixel points in the set to obtain the brightness value corresponding to the set.
- In an embodiment, each pixel point includes a plurality of sub-pixel points arranged in an array, and the brightness determining unit is further configured to determine a target pixel point in each pixel point group to obtain multiple target pixel points; generate the sub-brightness map corresponding to each pixel point group according to the brightness values of the sub-pixel points included in its target pixel point; and generate the target brightness map according to the sub-brightness maps corresponding to the pixel point groups.
- In an embodiment, the brightness determining unit is further configured to determine, in each pixel point group, the pixel point whose color channel is green, and determine that pixel point as the target pixel point.
- In an embodiment, the brightness determining unit is further configured to determine the pixel points at the same position in each pixel point group to obtain multiple pixel point sets, where the pixel points included in each set occupy the same position in their pixel point groups, and to generate, according to the brightness values of the pixel points in the multiple sets, multiple target brightness maps in one-to-one correspondence with the sets.
- The phase difference determining unit is further configured to generate, for each target brightness map, an intermediate phase difference map corresponding to that target brightness map according to the phase differences of the mutually matched pixels, and to generate the phase difference value in the first direction and the phase difference value in the second direction according to the intermediate phase difference maps corresponding to the target brightness maps.
- In an embodiment, the phase difference determining unit is further configured to determine the pixels at the same position in each intermediate phase difference map to obtain multiple phase difference pixel sets, where the pixels included in each set occupy the same position in their intermediate phase difference maps; for each phase difference pixel set, splice the pixels in the set to obtain the sub phase difference map corresponding to the set; and splice the obtained sub phase difference maps to obtain a target phase difference map, the target phase difference map including the phase difference value in the first direction and the phase difference value in the second direction.
- In an embodiment, the phase difference determining unit is further configured to segment the target brightness map to obtain multiple brightness map areas, each brightness map area including one row of pixels of the target brightness map or one column of pixels of the target brightness map; obtain multiple first brightness map areas and multiple second brightness map areas from the brightness map areas, a first brightness map area including the pixels of an even-numbered row or an even-numbered column of the target brightness map and a second brightness map area including the pixels of an odd-numbered row or an odd-numbered column; and form the first segmented brightness map from the first brightness map areas and the second segmented brightness map from the second brightness map areas.
- In an embodiment, the phase difference determining unit is further configured to, when a brightness map area includes one row of pixels of the target brightness map, determine a first adjacent pixel set in each row of pixels included in the first segmented brightness map, the pixels of the first adjacent pixel set corresponding to the same pixel point group; for each first adjacent pixel set, search the second segmented brightness map for the corresponding first matching pixel set; and determine, from the position difference between each first adjacent pixel set and each first matching pixel set, the phase difference of the mutually corresponding sets, obtaining the phase difference value in the second direction.
- In an embodiment, the phase difference determining unit is further configured to, when a brightness map area includes one column of pixels of the target brightness map, determine a second adjacent pixel set in each column of pixels included in the first segmented brightness map, the pixels of the second adjacent pixel set corresponding to the same pixel point group; for each second adjacent pixel set, search the second segmented brightness map for the corresponding second matching pixel set; and determine, from the position difference between each second adjacent pixel set and each second matching pixel set, the phase difference of the mutually corresponding sets, obtaining the phase difference value in the first direction.
- The division of the modules in the focusing apparatus above is only for illustration; in other embodiments, the focusing apparatus may be divided into different modules as required to complete all or part of its functions.
- FIG. 23 is a schematic diagram of the internal structure of an electronic device in an embodiment.
- The electronic device includes a processor and a memory connected through a system bus.
- The processor provides the computing and control capability that supports the operation of the entire electronic device.
- The memory may include a non-volatile storage medium and an internal memory.
- The non-volatile storage medium stores an operating system and a computer program.
- The computer program can be executed by the processor to implement the focusing method provided in the embodiments.
- The internal memory provides a cached runtime environment for the operating system and the computer program in the non-volatile storage medium.
- The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device or the like.
- Each module of the focusing apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on a terminal or a server, and the program modules constituted by the computer program may be stored in the memory of the terminal or server. When the computer program is executed by the processor, the steps of the methods described in the embodiments of the present application are implemented.
- The embodiments of the present application also provide a computer-readable storage medium: one or more non-volatile computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the focusing method.
- A computer program product containing instructions, when run on a computer, causes the computer to execute the focusing method.
- Any reference to memory, storage, a database or other media used in the embodiments of the present application may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM), which acts as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
- Focusing (AREA)
Abstract
The embodiments of the present application relate to an imaging assembly, a focusing method and apparatus, an electronic device and a computer-readable storage medium. The focusing method includes: acquiring a phase difference value during shooting, the phase difference value including a phase difference value in a first direction and a phase difference value in a second direction, the first direction and the second direction forming a preset angle; determining a defocus distance value according to the phase difference value in the first direction and the phase difference value in the second direction; and controlling a lens to move for focusing according to the defocus distance value.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to the Chinese patent application No. 2019111027250, entitled "Imaging assembly, focusing method and apparatus, electronic device", filed with the Chinese Patent Office on November 12, 2019, the entire contents of which are incorporated herein by reference.
The present application relates to the field of imaging, and in particular to an imaging assembly, a focusing method and apparatus, an electronic device and a computer-readable storage medium.
With the development of electronic device technology, more and more users take images with electronic devices. To ensure that the captured image is clear, the camera module of the electronic device usually needs to be focused, that is, the distance between the lens and the image sensor is adjusted so that the photographed object is on the focal plane. Traditional focusing methods include phase detection auto focus (PDAF).
Traditional phase detection auto focus sets phase detection pixel points in pairs among the pixel points of the image sensor, where one phase detection pixel point of each pair is shielded on the left side and the other is shielded on the right side. In this way the imaging beam directed at each phase detection pixel point pair is separated into a left part and a right part, and the phase difference is obtained by comparing the images formed by the two parts of the imaging beam; focusing can then be performed according to the phase difference. The phase difference refers to the difference of the imaging positions of imaging light entering from different directions.
However, focusing by setting phase detection pixel points in the image sensor in the way described above is not very accurate.
SUMMARY
The embodiments of the present application provide an imaging assembly, a focusing method and apparatus, an electronic device and a computer-readable storage medium, which can improve the accuracy of focusing.
An imaging assembly includes an image sensor. The image sensor includes a plurality of pixel point groups arranged in an array; each pixel point group includes M*N pixel points arranged in an array, and each pixel point corresponds to one photosensitive unit, where M and N are both natural numbers greater than or equal to 2. The photosensitive unit is configured to generate, through photoelectric conversion, a pixel signal consistent with the received light intensity.
An imaging device includes a lens, a filter and an imaging assembly, the lens, the filter and the imaging assembly being located in sequence on the incident light path. The imaging assembly includes an image sensor; the image sensor includes a plurality of pixel point groups arranged in an array, each pixel point group includes M*N pixel points arranged in an array, and each pixel point corresponds to one photosensitive unit, where M and N are both natural numbers greater than or equal to 2. The photosensitive unit is configured to generate, through photoelectric conversion, a pixel signal consistent with the received light intensity.
A focusing method is applied to an electronic device. The electronic device includes an image sensor; the image sensor includes a plurality of pixel point groups arranged in an array, each pixel point group includes M*N pixel points arranged in an array, and each pixel point corresponds to one photosensitive unit, where M and N are both natural numbers greater than or equal to 2. The method includes:
acquiring a phase difference value through the image sensor during shooting, the phase difference value including a phase difference value in a first direction and a phase difference value in a second direction, the first direction and the second direction forming a preset angle;
determining a defocus distance value according to the phase difference value in the first direction and the phase difference value in the second direction; and
controlling a lens to move for focusing according to the defocus distance value.
A focusing apparatus is applied to an electronic device. The electronic device includes an image sensor; the image sensor includes a plurality of pixel point groups arranged in an array, each pixel point group includes M*N pixel points arranged in an array, and each pixel point corresponds to one photosensitive unit, where M and N are both natural numbers greater than or equal to 2. The apparatus includes:
a phase difference acquisition module configured to acquire a phase difference value through the image sensor, the phase difference value including a phase difference value in a first direction and a phase difference value in a second direction, the first direction and the second direction forming a preset angle;
a processing module configured to determine a defocus distance value according to the phase difference value in the first direction and the phase difference value in the second direction; and
a control module configured to control a lens to move for focusing according to the defocus distance value.
An electronic device includes a memory and a processor, the memory storing a computer program that, when executed by the processor, implements the steps of the method described above.
A computer-readable storage medium has stored thereon a computer program that, when executed by a processor, implements the steps of the method described above.
With the imaging assembly, focusing method and apparatus, electronic device and computer-readable storage medium described above, the phase difference value in the first direction and the phase difference value in the second direction are acquired, the defocus distance value is determined from them, and the lens movement is controlled according to the defocus distance value, realizing phase detection auto focus. Because phase difference values in both the first and the second direction can be output, the phase difference value can be used effectively for focusing in scenes with horizontal or vertical texture, improving the accuracy and stability of focusing.
In order to explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for a person of ordinary skill in the art, other drawings can be obtained from these drawings without creative work.
FIG. 1 is a schematic diagram of the principle of phase detection auto focus;
FIG. 2 is a schematic diagram of setting phase detection pixel points in pairs among the pixel points of an image sensor;
FIG. 3 is a schematic diagram of part of the structure of an image sensor in an embodiment;
FIG. 4 is a schematic diagram of the structure of a pixel point in an embodiment;
FIG. 5 is a schematic diagram of the structure of an imaging device in an embodiment;
FIG. 6 is a schematic diagram of filters set on a pixel point group in an embodiment;
FIG. 7 is a flowchart of a focusing method in an embodiment;
FIG. 8 is a flowchart of acquiring a phase difference in an embodiment;
FIG. 9 is a schematic diagram of a pixel point group in an embodiment;
FIG. 10 is a schematic diagram of a sub-brightness map in an embodiment;
FIG. 11 is a flowchart of obtaining a target brightness map in an embodiment;
FIG. 12 is a schematic diagram of generating the sub-brightness map corresponding to a pixel point group from the brightness values of the sub-pixel points included in the target pixel point of the group in an embodiment;
FIG. 13 is a flowchart of obtaining a target brightness map in another embodiment;
FIG. 14 is a schematic diagram of determining the pixel points at the same position in each pixel point group in an embodiment;
FIG. 15 is a schematic diagram of determining the pixels at the same position in each intermediate phase difference map in an embodiment;
FIG. 16 is a schematic diagram of a target phase difference map in an embodiment;
FIG. 17 is a flowchart of a method of segmenting the target brightness map to obtain a first segmented brightness map and a second segmented brightness map in an embodiment;
FIG. 18 is a schematic diagram of generating the first and second segmented brightness maps from the target brightness map in an embodiment;
FIG. 19 is a schematic diagram of generating the first and second segmented brightness maps from the target brightness map in another embodiment;
FIG. 20 is a flowchart of determining the phase difference of mutually matched pixels from the position difference of the mutually matched pixels in the first and second segmented brightness maps in an embodiment;
FIG. 21 is a flowchart of determining the phase difference of mutually matched pixels from the position difference of the mutually matched pixels in the first and second segmented brightness maps in another embodiment;
FIG. 22 is a structural block diagram of a focusing apparatus in an embodiment;
FIG. 23 is a block diagram of a computer device provided by an embodiment of the present application.
In order to make the purpose, technical solution and advantages of the present application clearer, the present application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present application and are not used to limit it.
FIG. 1 is a schematic diagram of the principle of phase detection auto focus (PDAF). As shown in FIG. 1, M1 is the position of the image sensor when the imaging device is in the in-focus state, the in-focus state being the state of successful focusing. When the image sensor is located at position M1, the imaging light rays g reflected by the object W toward the lens in different directions converge on the image sensor; that is, the imaging light rays g reflected by the object W toward the lens in different directions are imaged at the same position on the image sensor, and the image sensor images clearly.
M2 and M3 are positions where the image sensor may be when the imaging device is not in focus. As shown in FIG. 1, when the image sensor is located at position M2 or M3, the imaging light rays g reflected by the object W toward the lens in different directions are imaged at different positions. Referring to FIG. 1, when the image sensor is at position M2, the imaging light rays g are imaged at positions A and B respectively; when the image sensor is at position M3, they are imaged at positions C and D respectively. In these cases the image sensor does not image clearly.
In PDAF technology, the difference in the positions of the images formed in the image sensor by imaging light entering the lens from different directions can be acquired, for example the difference between positions A and B, or between positions C and D, in FIG. 1. After this difference is acquired, the defocus distance can be obtained from it and from the geometric relationship between the lens and the image sensor in the camera, the defocus distance being the distance between the current position of the image sensor and the position it should occupy in the in-focus state; the imaging device can then focus according to the obtained defocus distance.
It follows that the calculated PD value is 0 at focus; conversely, the larger the calculated value, the farther from the in-focus position, and the smaller the value, the closer. With PDAF, the PD value is calculated, the correspondence between PD values and defocus distances is obtained by calibration, the defocus distance is derived, and the lens is then moved to the in-focus position according to the defocus distance, achieving focusing.
In the related art, some phase detection pixel points can be set in pairs among the pixel points of the image sensor. As shown in FIG. 2, phase detection pixel point pairs (hereinafter, pixel point pairs) A, B and C may be set in the image sensor, where in each pixel point pair one phase detection pixel point is shielded on the left side (Left Shield) and the other on the right side (Right Shield).
For a phase detection pixel point shielded on the left side, only the right part of the imaging beam directed at it can be imaged on its photosensitive part (the unshielded part); for a phase detection pixel point shielded on the right side, only the left part of the imaging beam can be imaged on its photosensitive part. In this way the imaging beam is divided into a left part and a right part, and the phase difference is obtained by comparing the images formed by the two parts.
However, since the phase detection pixel points set in the image sensor are usually shielded on the left and right sides respectively, for scenes with horizontal texture the PD value cannot be calculated from the phase detection pixel points. For example, when the shooting scene is a horizontal line, the PD characteristics yield a left image and a right image, but the PD value cannot be calculated from them.
To solve the situation where phase detection auto focus cannot calculate a PD value to achieve focusing in some horizontally textured scenes, the embodiments of the present application provide an imaging assembly that can detect and output a phase difference value in a first direction and a phase difference value in a second direction; for horizontally textured scenes, the phase difference value in the second direction can be used to achieve focusing.
In an embodiment, the present application provides an imaging assembly that includes an image sensor. The image sensor may be a complementary metal oxide semiconductor (CMOS) image sensor, a charge-coupled device (CCD), a quantum thin-film sensor, an organic sensor or the like.
FIG. 3 is a schematic diagram of part of the structure of an image sensor in an embodiment. The image sensor 300 includes a plurality of pixel point groups Z arranged in an array; each pixel point group Z includes a plurality of pixel points D arranged in an array, and each pixel point D corresponds to one photosensitive unit. The plurality of pixel points includes M*N pixel points, where M and N are both natural numbers greater than or equal to 2. Each pixel point D includes a plurality of sub-pixel points d arranged in an array; that is, each photosensitive unit may be composed of a plurality of photosensitive elements arranged in an array, a photosensitive element being an element that can convert a light signal into an electrical signal. Referring to FIG. 3, the sub-pixel points d arranged in an array within each pixel point D are jointly covered by one microlens W. In an embodiment, the photosensitive element may be a photodiode. In this embodiment, each pixel point group Z includes 4 pixel points D arranged in a 2*2 array, and each pixel point D may include 4 sub-pixel points d arranged in a 2*2 array, the 4 sub-pixel points d being jointly covered by one microlens W. Each pixel point D includes 2*2 photodiodes set corresponding to the 4 sub-pixel points d; each photodiode receives a light signal and performs photoelectric conversion, converting the light signal into an electrical signal for output. The 4 sub-pixel points d included in each pixel point D are set corresponding to a filter of the same color, so each pixel point D corresponds to one color channel, such as the red channel R, the green channel G or the blue channel B.
As shown in FIG. 4, taking a pixel point D that includes sub-pixel point 1, sub-pixel point 2, sub-pixel point 3 and sub-pixel point 4 as an example, the signals of sub-pixel points 1 and 2 can be merged for output and the signals of sub-pixel points 3 and 4 merged for output, thereby constructing two PD pixel pairs along the second direction (the vertical direction); from the phase values of the two PD pixel pairs, the PD value (phase difference value) of the sub-pixel points in pixel point D along the second direction can be determined. Merging the signals of sub-pixel points 1 and 3 for output and the signals of sub-pixel points 2 and 4 for output constructs two PD pixel pairs along the first direction (the horizontal direction), from which the PD value of the sub-pixel points in pixel point D along the first direction can be determined.
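As an illustrative sketch (not part of the original disclosure's text), the following Python fragment shows the merging of the four sub-pixel signals of one pixel point into PD pixel pairs along the two directions; the 2*2 NumPy block stands in for the sub-pixel signals of FIG. 4.

```python
import numpy as np

def pd_pixel_pairs(sub):
    """sub is the 2x2 block [[d1, d2], [d3, d4]] of sub-pixel signals of
    one pixel point. Merging d1+d2 against d3+d4 forms a PD pixel pair
    along the second (vertical) direction; merging d1+d3 against d2+d4
    forms a PD pixel pair along the first (horizontal) direction."""
    top, bottom = sub[0, 0] + sub[0, 1], sub[1, 0] + sub[1, 1]
    left, right = sub[0, 0] + sub[1, 0], sub[0, 1] + sub[1, 1]
    return (top, bottom), (left, right)

vertical_pair, horizontal_pair = pd_pixel_pairs(np.array([[10, 12], [9, 11]]))
```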
FIG. 5 is a schematic diagram of the structure of an imaging device in an embodiment. As shown in FIG. 5, the imaging device includes a lens 50, a filter 52 and an imaging assembly 54, located in sequence on the incident light path; that is, the lens 50 is set above the filter 52, and the filter 52 is set on the imaging assembly 54.
The imaging assembly 54 includes the image sensor of FIG. 3. The image sensor includes a plurality of pixel point groups Z arranged in an array; each pixel point group Z includes a plurality of pixel points D arranged in an array, each pixel point D corresponds to one photosensitive unit, and each photosensitive unit may be composed of a plurality of photosensitive elements arranged in an array. In this embodiment, each pixel point D includes 4 sub-pixel points d arranged in a 2*2 array, and each sub-pixel point d corresponds to one photodiode 542; that is, 2*2 photodiodes 542 are set corresponding to the 4 sub-pixel points d, and the 4 sub-pixel points d share one lens.
The filter 52 may include three kinds, red, green and blue, which respectively transmit only light of the wavelengths corresponding to red, green and blue. The 4 sub-pixel points d of one pixel point D are set corresponding to a filter of the same color. In other embodiments, the filter may also be white, which facilitates the passage of light over a larger spectral (wavelength) range and increases the light flux through the white filter.
The lens 50 receives incident light and transmits it to the filter 52. After the filter 52 filters the incident light, the filtered light is incident on the imaging assembly 54 on a pixel basis.
The photosensitive units in the image sensor of the imaging assembly 54 convert the light incident from the filter 52 into charge signals through the photoelectric effect, generate pixel signals consistent with the charge signals, and finally output an image after a series of processing.
As explained above, the pixel points included in the image sensor and the pixels included in an image are two different concepts: a pixel of an image is the smallest constituent unit of the image and is generally represented by a sequence of numbers usually called the pixel value. The embodiments of the present application involve both concepts, so a brief explanation is given here for the convenience of the reader.
FIG. 6 is a schematic diagram of filters set on a pixel point group in an embodiment. The pixel point group Z includes 4 pixel points D arranged in a two-row, two-column array, where the color channel of the pixel point in the first row and first column is green, that is, its filter is a green filter; the color channel of the pixel point in the first row and second column is red (a red filter); the color channel of the pixel point in the second row and first column is blue (a blue filter); and the color channel of the pixel point in the second row and second column is green (a green filter).
FIG. 7 is a flowchart of a focusing method in an embodiment. The focusing method in this embodiment is described as running on the imaging device of FIG. 5. As shown in FIG. 7, the focusing method includes steps 702 to 706.
Step 702: acquire a phase difference value through the image sensor during shooting, the phase difference value including a phase difference value in a first direction and a phase difference value in a second direction, the first direction and the second direction forming a preset angle.
Specifically, when an image is captured by the imaging device of the electronic device, a phase difference value is acquired that includes a phase difference value in the first direction and a phase difference value in the second direction. The first and second directions may form a preset angle, which may be any angle other than 0, 180 and 360 degrees. In this embodiment, the phase difference value in the first direction refers to the phase difference value in the horizontal direction, and the phase difference value in the second direction refers to the phase difference value in the vertical direction.
Step 704: determine a defocus distance value according to the phase difference value in the first direction and the phase difference value in the second direction.
The correspondence between phase difference values and defocus distance values can be obtained by calibration, and is as follows:
defocus = PD * slope(DCC), where DCC (Defocus Conversion Coefficient) is obtained by calibration, defocus is the defocus distance value, slope is the slope function and PD is the phase difference value.
The calibration process of the correspondence between phase difference value and defocus distance value includes: dividing the effective focusing stroke of the camera module into 10 equal parts, that is, (near-focus DAC − far-focus DAC)/10, so as to cover the focusing range of the motor; focusing at each focusing DAC position (the DAC may range from 0 to 1023) and recording the phase difference at the current focusing DAC position; after completing the motor focusing stroke, taking the group of 10 focusing DACs and comparing them with the obtained PD values to generate 10 similar ratios K; and fitting the two-dimensional data composed of DAC and PD to obtain a straight line with slope K.
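A hedged sketch of the calibration just described: ten (DAC, PD) samples taken across the motor stroke are fitted with a least-squares line whose slope stands in for the ratio-then-fit procedure of the text; the function and variable names are illustrative.

```python
import numpy as np

def calibrate_dcc(dac_positions, pd_values):
    """Fit a straight line through the (PD, DAC) samples recorded at the
    ~10 focus DAC positions across the motor stroke; its slope K is the
    defocus conversion coefficient used in defocus = PD * slope(DCC)."""
    k, _intercept = np.polyfit(np.asarray(pd_values, dtype=float),
                               np.asarray(dac_positions, dtype=float), 1)
    return k
```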
Step 706: control the lens to move for focusing according to the defocus distance value.
Specifically, when neither the phase difference value in the first direction nor that in the second direction is 0, the confidence of the phase difference value in the first direction and the confidence of the phase difference value in the second direction can be obtained, the phase difference value with the higher confidence is selected as the target phase difference value, and the corresponding defocus distance value is then obtained from the mapping between phase difference values and defocus distance values according to the determined target phase difference value.
The confidence indicates how credible the phase difference calculation result is. In this embodiment, taking the calculation of the horizontal phase difference as an example, to calculate the phase difference at coordinate x of a row of the image, the brightness values of the 5 pixels x−2, x−1, x, x+1, x+2 of the left image are taken and moved over the right image, the movement range being −10 to +10. That is:
compare the right-image brightness values R_{x−12}, R_{x−11}, R_{x−10}, R_{x−9}, R_{x−8} with x−2, x−1, x, x+1, x+2 for similarity;
compare the right-image brightness values R_{x−11}, R_{x−10}, R_{x−9}, R_{x−8}, R_{x−7} with x−2, x−1, x, x+1, x+2 for similarity;
……
compare the right-image brightness values R_{x−2}, R_{x−1}, R_x, R_{x+1}, R_{x+2} with x−2, x−1, x, x+1, x+2 for similarity;
compare the right-image brightness values R_{x−1}, R_x, R_{x+1}, R_{x+2}, R_{x+3} with x−2, x−1, x, x+1, x+2 for similarity;
……
compare the right-image brightness values R_{x+7}, R_{x+8}, R_{x+9}, R_{x+10}, R_{x+11} with x−2, x−1, x, x+1, x+2 for similarity;
compare the right-image brightness values R_{x+8}, R_{x+9}, R_{x+10}, R_{x+11}, R_{x+12} with x−2, x−1, x, x+1, x+2 for similarity.
Taking the five right-image pixel values R_{x−2}, R_{x−1}, R_x, R_{x+1}, R_{x+2} and the five left-image pixel values x_{−2}, x_{−1}, x, x_{+1}, x_{+2} as an example, the degree of similarity matching may be |R_{x−2} − x_{−2}| + |R_{x−1} − x_{−1}| + |R_x − x| + |R_{x+1} − x_{+1}| + |R_{x+2} − x_{+2}|. The smaller this value, the higher the similarity; the higher the similarity, the higher the confidence. Similar pixel values can serve as matched pixel points from which the phase difference is obtained. For the upper and lower images, the brightness values of a column of pixels in the upper image and of an equal number of pixels in a column of the lower image are compared for similarity. The confidence acquisition process for the upper and lower images is similar to that for the left and right images and is not repeated here.
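The sliding-window comparison above can be sketched as follows; the sum-of-absolute-differences score and the −10 to +10 range follow the text, while the margin-based confidence is an assumed stand-in, since the patent does not give an explicit confidence formula.

```python
import numpy as np

def pd_with_confidence(left_row, right_row, x, search=10):
    """Slide a 5-pixel window over the right image within +/-search
    columns of x, scoring each shift by the sum of absolute differences
    against the left-image window; the best shift is the phase
    difference, and the normalised margin over the runner-up serves as
    a simple confidence."""
    ref = np.asarray(left_row[x - 2:x + 3], dtype=np.int64)
    scores = {}
    for d in range(-search, search + 1):
        lo = x + d - 2
        if lo < 0 or lo + 5 > len(right_row):
            continue  # window would leave the image row
        cand = np.asarray(right_row[lo:lo + 5], dtype=np.int64)
        scores[d] = int(np.abs(ref - cand).sum())
    best = min(scores, key=scores.get)
    ordered = sorted(scores.values())
    margin = (ordered[1] - ordered[0]) / (ordered[1] + 1e-9) if len(ordered) > 1 else 0.0
    return best, margin
```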
When the confidence of the phase difference value in the first direction is greater than that in the second direction, the phase difference value in the first direction is selected, the corresponding defocus distance value is obtained from it, and the movement direction is determined to be horizontal.
When the confidence of the phase difference value in the first direction is less than that in the second direction, the phase difference value in the second direction is selected, the corresponding defocus distance value is obtained from it, and the movement direction is determined to be vertical.
When the two confidences are equal, the defocus distance value in the horizontal direction can be determined from the phase difference value in the first direction and the defocus distance value in the vertical direction from the phase difference value in the second direction; the lens then moves first by the horizontal defocus distance value and then by the vertical one, or first by the vertical one and then by the horizontal one.
For scenes with horizontal texture, since the PD pixel pairs in the horizontal direction cannot yield the phase difference value in the first direction, the PD pixel pairs in the vertical direction can be compared to calculate the phase difference value in the second direction; the defocus distance value is calculated from it, and the lens is moved according to the vertical defocus distance value to achieve focusing.
For scenes with vertical texture, since the PD pixel pairs in the vertical direction cannot yield the phase difference value in the second direction, the PD pixel pairs in the horizontal direction can be compared to calculate the phase difference value in the first direction; the defocus distance value is calculated from it, and the lens is moved according to the horizontal defocus distance value to achieve focusing.
With the focusing method above, the phase difference values in the first and second directions are acquired, the defocus distance value and movement direction are determined from them, and the lens movement is controlled accordingly, realizing phase detection auto focus. Because phase difference values in both directions can be output, the phase difference value can be used effectively for focusing in scenes with horizontal or vertical texture, improving the accuracy and stability of focusing.
Generally, frequency-domain and spatial-domain algorithms can be used to acquire the phase difference value. The frequency-domain algorithm exploits the Fourier shift property: the acquired target brightness map is transformed from the spatial domain to the frequency domain by the Fourier transform and the phase correlation is then computed; when the correlation reaches its maximum (peak), the maximum displacement is indicated, and an inverse Fourier transform then gives the maximum displacement in the spatial domain. The spatial-domain algorithm finds feature points, such as edge features, DoG (difference of Gaussians) and Harris corners, and uses these feature points to calculate the displacement.
FIG. 8 is a flowchart of acquiring the phase difference in an embodiment. As shown in FIG. 8, acquiring the phase difference includes:
Step 802: obtain a target brightness map according to the brightness values of the pixel points included in each pixel point group.
Usually, the brightness value of a pixel point of the image sensor can be characterized by the brightness values of the sub-pixel points it includes. The imaging device may obtain the target brightness map from the brightness values of the sub-pixel points in the pixel points of each pixel point group, the brightness value of a sub-pixel point being the brightness value of the light signal received by its corresponding photosensitive element.
As stated above, a sub-pixel point of the image sensor is a photosensitive element that can convert a light signal into an electrical signal, so the intensity of the light signal received by a sub-pixel point can be obtained from the electrical signal it outputs, and the brightness value of the sub-pixel point is then obtained from that intensity.
The target brightness map in the embodiments of the present application reflects the brightness values of the sub-pixel points in the image sensor; it may include a plurality of pixels, the pixel value of each pixel being obtained from the brightness value of a sub-pixel point in the image sensor.
Step 804: segment the target brightness map to obtain a first segmented brightness map and a second segmented brightness map, and determine the phase difference values of mutually matched pixels according to the position differences of the mutually matched pixels in the first and second segmented brightness maps.
In one embodiment, the imaging device may segment the target brightness map along the column direction (the y-axis direction of the image coordinate system); in this process each dividing line is perpendicular to the column direction.
In another embodiment, the imaging device may segment the target brightness map along the row direction (the x-axis direction of the image coordinate system); in this process each dividing line is perpendicular to the row direction.
The first and second segmented brightness maps obtained by segmenting along the column direction may be called the upper image and the lower image, respectively; those obtained by segmenting along the row direction may be called the left image and the right image, respectively.
"Mutually matched pixels" means that the pixel matrices composed of the pixels themselves and their surrounding pixels are similar to each other. For example, pixel a and its surrounding pixels in the first segmented brightness map form a 3-row, 3-column pixel matrix, and pixel b and its surrounding pixels in the second segmented brightness map form another 3-row, 3-column pixel matrix (the concrete pixel values of the two matrices are given as figures in the original document and are not reproduced here). If the two matrices are similar, pixel a and pixel b can be considered to match each other. There are many ways to judge whether pixel matrices are similar; commonly, the difference of each pair of corresponding pixel values in the two matrices is taken, the absolute values of the differences are added, and the result of the addition is used for the judgment: if it is smaller than a preset threshold, the matrices are considered similar; otherwise, they are not.
For example, for the two 3-row, 3-column pixel matrices above, the differences of corresponding entries can be taken (for instance 1 and 2, 15 and 15, 70 and 70, and so on) and the absolute values of the differences added; if the result of the addition, say 3, is smaller than the preset threshold, the two matrices are considered similar.
Another way to judge whether pixel matrices are similar is to extract their edge features, for example with a Sobel convolution kernel or a high-Laplacian calculation, and judge similarity from the edge features.
In the embodiments of the present application, the "position difference of mutually matched pixels" refers to the difference between the position, in the first segmented brightness map, of the matched pixel located there and the position, in the second segmented brightness map, of the matched pixel located there. In the example above, the position difference of the matched pixels a and b is the difference between the position of pixel a in the first segmented brightness map and the position of pixel b in the second segmented brightness map.
Mutually matched pixels correspond, respectively, to the different images formed in the image sensor by imaging light entering the lens from different directions. For example, pixel a in the first segmented brightness map matches pixel b in the second segmented brightness map, where pixel a may correspond to the image formed at position A in FIG. 1 and pixel b to the image formed at position B.
Since mutually matched pixels correspond to different images formed by imaging light entering the lens from different directions, the phase difference of the mutually matched pixels can be determined from their position difference.
Step 806: determine the phase difference value in the first direction or the phase difference value in the second direction according to the phase difference values of the mutually matched pixels.
When the first segmented brightness map includes the pixels of the even-numbered rows, the second segmented brightness map includes the pixels of the odd-numbered rows, and pixel a in the first map matches pixel b in the second map, the phase difference value in the second direction can be determined from the phase difference of the matched pixels a and b.
When the first segmented brightness map includes the pixels of the even-numbered columns, the second segmented brightness map includes the pixels of the odd-numbered columns, and pixel a in the first map matches pixel b in the second map, the phase difference value in the first direction can be determined from the phase difference of the matched pixels a and b.
The target brightness map is obtained from the brightness values of the pixel points in the pixel point groups; after the target brightness map is divided into two segmented brightness maps, the phase difference values of mutually matched pixels can be determined quickly through pixel matching. The result contains rich phase difference values, which can improve the precision of the phase difference value and the accuracy and stability of focusing.
In an embodiment, each pixel point includes a plurality of sub-pixel points arranged in an array, and obtaining the target brightness map according to the brightness values of the pixel points included in each pixel point group includes: for each pixel point group, obtaining the sub-brightness map corresponding to the group according to the brightness values of the sub-pixel points at the same position in each pixel point of the group; and generating the target brightness map according to the sub-brightness maps corresponding to the pixel point groups.
The sub-pixel points at the same position in each pixel point are the sub-pixel points arranged at the same position within their respective pixel points.
FIG. 9 is a schematic diagram of a pixel point group in an embodiment. As shown in FIG. 9, the pixel point group includes 4 pixel points arranged in a two-row, two-column array, namely pixel points D1, D2, D3 and D4, where each pixel point includes 4 sub-pixel points arranged in a two-row, two-column array; the sub-pixel points are d11, d12, d13, d14, d21, d22, d23, d24, d31, d32, d33, d34, d41, d42, d43 and d44.
As shown in FIG. 9, sub-pixel points d11, d21, d31 and d41 are arranged at the same position (first row, first column) within their pixel points; d12, d22, d32 and d42 at the first row, second column; d13, d23, d33 and d43 at the second row, first column; and d14, d24, d34 and d44 at the second row, second column.
In an embodiment, obtaining the sub-brightness map corresponding to a pixel point group according to the brightness values of the sub-pixel points at the same position in each pixel point of the group may include steps A1 to A3.
Step A1: the imaging device determines the sub-pixel points at the same position in each pixel point to obtain a plurality of sub-pixel point sets.
The sub-pixel points included in each sub-pixel point set occupy the same position in their pixel points.
The imaging device determines the sub-pixel points at the same position in pixel points D1, D2, D3 and D4 and obtains 4 sub-pixel point sets J1, J2, J3 and J4: set J1 includes sub-pixel points d11, d21, d31 and d41, all at the first row, first column of their pixel points; set J2 includes d12, d22, d32 and d42, all at the first row, second column; set J3 includes d13, d23, d33 and d43, all at the second row, first column; and set J4 includes d14, d24, d34 and d44, all at the second row, second column.
Step A2: for each sub-pixel point set, the imaging device obtains the brightness value corresponding to the set according to the brightness value of each sub-pixel point in the set.
Optionally, in step A2 the imaging device may determine the color coefficient corresponding to each sub-pixel point in the set, the color coefficient being determined according to the color channel of the sub-pixel point.
For example, sub-pixel point d11 belongs to pixel point D1; the filter of D1 may be a green filter, that is, the color channel of D1 is green, so the color channel of its sub-pixel point d11 is also green, and the imaging device can determine the color coefficient of d11 according to its (green) color channel.
After determining the color coefficient of each sub-pixel point in the set, the imaging device may multiply the color coefficient of each sub-pixel point by its brightness value to obtain the weighted brightness value of each sub-pixel point; for example, multiplying the brightness value of d11 by its color coefficient gives the weighted brightness value of d11.
After obtaining the weighted brightness values, the imaging device may add the weighted brightness values of the sub-pixel points in the set to obtain the brightness value corresponding to the set.
For example, for the sub-pixel point set J1, the corresponding brightness value can be calculated with the first formula:
Y_TL = Y_21*C_R + (Y_11 + Y_41)*C_G/2 + Y_31*C_B,
where Y_TL is the brightness value corresponding to set J1; Y_21, Y_11, Y_41 and Y_31 are the brightness values of sub-pixel points d21, d11, d41 and d31; C_R is the color coefficient of d21, C_G/2 the color coefficient of d11 and d41, and C_B the color coefficient of d31; and Y_21*C_R, Y_11*C_G/2, Y_41*C_G/2 and Y_31*C_B are the weighted brightness values of d21, d11, d41 and d31, respectively.
For the sub-pixel point set J2, the corresponding brightness value can be calculated with the second formula:
Y_TR = Y_22*C_R + (Y_12 + Y_42)*C_G/2 + Y_32*C_B,
where Y_TR is the brightness value corresponding to set J2, and the terms are the weighted brightness values of d22, d12, d42 and d32, defined analogously.
For the sub-pixel point set J3, the corresponding brightness value can be calculated with the third formula:
Y_BL = Y_23*C_R + (Y_13 + Y_43)*C_G/2 + Y_33*C_B,
where Y_BL is the brightness value corresponding to set J3, and the terms are the weighted brightness values of d23, d13, d43 and d33, defined analogously.
For the sub-pixel point set J4, the corresponding brightness value can be calculated with the fourth formula:
Y_BR = Y_24*C_R + (Y_14 + Y_44)*C_G/2 + Y_34*C_B,
where Y_BR is the brightness value corresponding to set J4, and the terms are the weighted brightness values of d24, d14, d44 and d34, defined analogously.
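The four formulas share one structure, which the following sketch makes explicit; concrete values of the color coefficients C_R, C_G and C_B are not given in the patent and are parameters here.

```python
import numpy as np

def set_brightness(y_r, y_g1, y_g2, y_b, c_r, c_g, c_b):
    # Y = Y_R*C_R + (Y_G1 + Y_G2)*C_G/2 + Y_B*C_B
    return y_r * c_r + (y_g1 + y_g2) * (c_g / 2.0) + y_b * c_b

def sub_brightness_map(d1, d2, d3, d4, c_r, c_g, c_b):
    """d1..d4 are the 2x2 sub-pixel brightness blocks of pixel points
    D1 (Gr), D2 (R), D3 (B) and D4 (Gb); the result is the 2x2
    sub-brightness map of FIG. 10 (Y_TL, Y_TR, Y_BL, Y_BR)."""
    out = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            out[i, j] = set_brightness(d2[i, j], d1[i, j], d4[i, j],
                                       d3[i, j], c_r, c_g, c_b)
    return out
```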
Step A3: the imaging device generates the sub-brightness map according to the brightness value corresponding to each sub-pixel point set.
The sub-brightness map includes a plurality of pixels; each pixel of the sub-brightness map corresponds to one sub-pixel point set, and the pixel value of each pixel equals the brightness value corresponding to its sub-pixel point set.
FIG. 10 is a schematic diagram of a sub-brightness map in an embodiment. As shown in FIG. 10, the sub-brightness map includes 4 pixels: the pixel at the first row, first column corresponds to set J1 and has pixel value Y_TL; the pixel at the first row, second column corresponds to set J2 and has pixel value Y_TR; the pixel at the second row, first column corresponds to set J3 and has pixel value Y_BL; and the pixel at the second row, second column corresponds to set J4 and has pixel value Y_BR.
FIG. 11 is a flowchart of obtaining the target brightness map in an embodiment. As shown in FIG. 11, this way of obtaining the target brightness map may include the following steps:
Step 1102: determine a target pixel point in each pixel point group to obtain multiple target pixel points.
A pixel point group may include a plurality of pixel points arranged in an array, and the imaging device may determine one target pixel point among the pixel points of each group, thereby obtaining multiple target pixel points.
Optionally, the imaging device may determine, in each pixel point group, the pixel point whose color channel is green (that is, whose filter is a green filter), and then determine that pixel point as the target pixel point.
Since pixel points with a green color channel have better photosensitivity, determining the green pixel point of each group as the target pixel point yields a higher-quality target brightness map in the subsequent steps.
Step 1104: generate the sub-brightness map corresponding to each pixel point group according to the brightness values of the sub-pixel points included in its target pixel point.
The sub-brightness map corresponding to a pixel point group includes a plurality of pixels; each pixel corresponds to one sub-pixel point of the target pixel point of that group, and the pixel value of each pixel is the brightness value of the corresponding sub-pixel point.
FIG. 12 is a schematic diagram of generating the sub-brightness map L corresponding to pixel point group Z1 from the brightness values of the sub-pixel points of its target pixel point DM. As shown in FIG. 12, the sub-brightness map L includes 4 pixels, each corresponding to one sub-pixel point of DM, the pixel value of each pixel being the brightness value of the corresponding sub-pixel point: the pixel at the first row, first column of L has pixel value Gr_TL, the brightness value of the sub-pixel point at the first row, first column of DM; the pixel at the first row, second column has pixel value Gr_TR; the pixel at the second row, first column has pixel value Gr_BL; and the pixel at the second row, second column has pixel value Gr_BR, each being the brightness value of the correspondingly positioned sub-pixel point of DM.
Step 1106: generate the target brightness map according to the sub-brightness map corresponding to each pixel point group.
The imaging device may splice the sub-brightness maps of the pixel point groups according to the array arrangement of the pixel point groups in the image sensor to obtain the target brightness map.
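A minimal sketch of step 1106: the per-group sub-brightness maps are tiled according to the array arrangement of the pixel point groups; the row-major ordering of the input list is an assumption.

```python
import numpy as np

def splice_target_brightness_map(sub_maps, groups_per_row):
    """Tile the per-group sub-brightness maps into the target brightness
    map following the array arrangement of the pixel point groups;
    sub_maps is a row-major list of equal-sized 2-D arrays."""
    h, w = sub_maps[0].shape
    rows = len(sub_maps) // groups_per_row
    target = np.zeros((rows * h, groups_per_row * w), dtype=sub_maps[0].dtype)
    for k, sm in enumerate(sub_maps):
        r, c = divmod(k, groups_per_row)
        target[r * h:(r + 1) * h, c * w:(c + 1) * w] = sm
    return target
```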
FIG. 13 is a flowchart of obtaining the target brightness map in another embodiment. As shown in FIG. 13, this way of obtaining the target brightness map may include the following steps:
Step 1302: determine the pixel points at the same position in each pixel point group to obtain multiple pixel point sets.
The pixel points included in each pixel point set occupy the same position in their pixel point groups.
As shown in FIG. 14, the imaging device determines the pixel points at the same position in pixel point groups Z1, Z2, Z3 and Z4 and obtains 4 pixel point sets P1, P2, P3 and P4: set P1 includes pixel points D11, D21, D31 and D41, all at the first row, first column of their groups; set P2 includes D12, D22, D32 and D42, all at the first row, second column; set P3 includes D13, D23, D33 and D43, all at the second row, first column; and set P4 includes D14, D24, D34 and D44, all at the second row, second column.
Step 1304: the imaging device generates, according to the brightness values of the pixel points in the multiple pixel point sets, multiple target brightness maps in one-to-one correspondence with the sets.
As stated above, the brightness value of a pixel point of the image sensor can be characterized by the brightness values of its sub-pixel points, so for each pixel point set the imaging device can generate the corresponding target brightness map from the brightness values of all the sub-pixel points of the pixel points in the set.
The target brightness map corresponding to a pixel point set includes a plurality of pixels; each pixel corresponds to one sub-pixel point of a pixel point of the set, and the pixel value of each pixel is the brightness value of the corresponding sub-pixel point.
In the way of FIG. 11, the imaging device determines one pixel point (the target pixel point) in each pixel point group and generates the target brightness map from the determined pixel points; in other words, the imaging device generates a single target brightness map from one pixel point of each group.
In the way of FIG. 13, the imaging device generates one target brightness map from one pixel point of each group, another target brightness map from another pixel point of each group, yet another from a further pixel point of each group, and so on. With this way, the number of target brightness maps obtained equals the number of pixel points included in a pixel point group.
After the multiple target brightness maps are obtained, for each target brightness map the imaging device segments it, obtains the first and second segmented brightness maps from the segmentation result, and, for the first and second segmented brightness maps of each target brightness map, determines the phase differences of the mutually matched pixels according to the position differences of the mutually matched pixels in the two maps.
For each target brightness map, the imaging device can obtain an intermediate phase difference map from the phase differences of the mutually matched pixels in its first and second segmented brightness maps, and then obtain the target phase difference map from the intermediate phase difference maps corresponding to the target brightness maps. The target phase difference map obtained in this way has higher accuracy: when a pixel point group includes 4 pixel points, its accuracy is 4 times that of the target phase difference map obtained with the second way of obtaining the target brightness map described above.
The embodiments of the present application now describe the technical process of obtaining the target phase difference map from the intermediate phase difference map corresponding to each target brightness map; the process may include steps B1 to B3.
Step B1: the imaging device determines the pixels at the same position in each intermediate phase difference map to obtain a plurality of phase difference pixel sets.
The pixels included in each phase difference pixel set occupy the same position in their intermediate phase difference maps.
Referring to FIG. 15, the imaging device determines the pixels at the same position in intermediate phase difference maps 1 to 4 and obtains 4 phase difference pixel sets Y1, Y2, Y3 and Y4: set Y1 includes pixel PD_Gr_1 of map 1, PD_R_1 of map 2, PD_B_1 of map 3 and PD_Gb_1 of map 4; set Y2 includes PD_Gr_2, PD_R_2, PD_B_2 and PD_Gb_2 of the respective maps; set Y3 includes PD_Gr_3, PD_R_3, PD_B_3 and PD_Gb_3; and set Y4 includes PD_Gr_4, PD_R_4, PD_B_4 and PD_Gb_4.
Step B2: for each phase difference pixel set, the imaging device splices the pixels in the set to obtain the sub phase difference map corresponding to the set.
The sub phase difference map includes a plurality of pixels; each pixel corresponds to one pixel of the phase difference pixel set, and the pixel value of each pixel equals the pixel value of its corresponding pixel.
Step B3: the imaging device splices the obtained sub phase difference maps to obtain the target phase difference map.
Referring to FIG. 16, which is a schematic diagram of the target phase difference map: the target phase difference map includes sub phase difference maps 1 to 4, where sub map 1 corresponds to phase difference pixel set Y1, sub map 2 to Y2, sub map 3 to Y3 and sub map 4 to Y4.
FIG. 17 is a flowchart of a way of segmenting the target brightness map to obtain the first and second segmented brightness maps in an embodiment; it can be applied to the imaging device of FIG. 3. As shown in FIG. 17, it may include the following steps:
Step 1702: segment the target brightness map to obtain multiple brightness map areas, where each brightness map area includes one row of pixels of the target brightness map, or each brightness map area includes one column of pixels of the target brightness map. Optionally, the imaging device may segment the target brightness map column by column along the row direction to obtain its pixel columns (the brightness map areas described above), or row by row along the column direction to obtain its pixel rows (likewise the brightness map areas described above).
Step 1704: obtain multiple first brightness map areas and multiple second brightness map areas from the brightness map areas, a first brightness map area including the pixels of an even-numbered row or an even-numbered column of the target brightness map, and a second brightness map area including the pixels of an odd-numbered row or an odd-numbered column. In other words, with column-by-column segmentation the imaging device may determine the even-numbered columns as first brightness map areas and the odd-numbered columns as second brightness map areas; with row-by-row segmentation, the even-numbered rows as first areas and the odd-numbered rows as second areas.
Step 1706: form the first segmented brightness map from the first brightness map areas and the second segmented brightness map from the second brightness map areas.
Referring to FIG. 18, suppose the target brightness map includes 6 rows and 6 columns of pixels. With column-by-column segmentation, the imaging device may determine the 1st, 3rd and 5th columns as second brightness map areas and the 2nd, 4th and 6th columns as first brightness map areas, then splice the first areas into the first segmented brightness map T1 (which includes the 2nd, 4th and 6th columns of the target brightness map) and the second areas into the second segmented brightness map T2 (which includes the 1st, 3rd and 5th columns).
Referring to FIG. 19, with row-by-row segmentation of the same 6-row, 6-column target brightness map, the imaging device may determine the 1st, 3rd and 5th rows as second brightness map areas and the 2nd, 4th and 6th rows as first brightness map areas, then splice the first areas into the first segmented brightness map T3 (rows 2, 4 and 6) and the second areas into the second segmented brightness map T4 (rows 1, 3 and 5).
Referring to FIG. 20, a way of determining the phase difference of mutually matched pixels from their position difference in the first and second segmented brightness maps is provided; it can be applied to the imaging device of FIG. 3 and, as shown in FIG. 20, may include the following steps:
Step 2002: when a brightness map area includes one row of pixels of the target brightness map, determine a first adjacent pixel set in each row of pixels of the first segmented brightness map, the pixels of the first adjacent pixel set corresponding to the same pixel point group of the image sensor.
Referring to the sub-brightness map of FIG. 10: when a brightness map area includes one row of pixels of the target brightness map, that is, when the imaging device segments the target brightness map row by row along the column direction, the two pixels of the first row of that sub-brightness map lie in the same row of the target brightness map, and after segmentation they therefore lie in the same brightness map area and in the same segmented brightness map; likewise the two pixels of its second row lie in the same brightness map area and in the other segmented brightness map. Assuming the first row of the sub-brightness map lies in an even-numbered pixel row of the target brightness map, its first-row pixels lie in the first segmented brightness map and its second-row pixels in the second segmented brightness map. The imaging device may determine the two pixels of the first row of the sub-brightness map as a first adjacent pixel set, because they correspond to the same pixel point group of the image sensor (the pixel point group shown in FIG. 8).
Step 2004: for each first adjacent pixel set, the imaging device searches the second segmented brightness map for the corresponding first matching pixel set. The imaging device may obtain a number of pixels around the first adjacent pixel set in the first segmented brightness map and form a search pixel matrix from the first adjacent pixel set and those surrounding pixels; for example, the search pixel matrix may include 9 pixels in 3 rows and 3 columns. The imaging device may then search the second segmented brightness map for a pixel matrix similar to the search pixel matrix; how to judge similarity has been described above and is not repeated here. After a similar pixel matrix is found in the second segmented brightness map, the imaging device may extract the first matching pixel set from it. The pixels of the first matching pixel set found by the search and the pixels of the first adjacent pixel set correspond, respectively, to the different images formed in the image sensor by imaging light entering the lens from different directions.
Step 2006: determine, from the position difference between each first adjacent pixel set and each first matching pixel set, the phase difference of the mutually corresponding first adjacent pixel set and first matching pixel set, obtaining the phase difference value in the second direction. The position difference between a first adjacent pixel set and a first matching pixel set refers to the difference between the position of the first adjacent pixel set in the first segmented brightness map and the position of the first matching pixel set in the second segmented brightness map.
Here the first and second segmented brightness maps may be called the upper image and the lower image, respectively, and the phase difference obtained from the upper and lower images reflects the difference of the imaging positions of the object in the vertical direction.
Referring to FIG. 21, another way of determining the phase difference of mutually matched pixels from their position difference in the first and second segmented brightness maps is shown; it can be applied to the imaging device of FIG. 3 and, as shown in FIG. 21, may include the following steps:
Step 2102: when a brightness map area includes one column of pixels of the target brightness map, determine a second adjacent pixel set in each column of pixels of the first segmented brightness map, the pixels of the second adjacent pixel set corresponding to the same pixel point group.
Step 2104: for each second adjacent pixel set, search the second segmented brightness map for the corresponding second matching pixel set.
Step 2106: determine, from the position difference between each second adjacent pixel set and each second matching pixel set, the phase difference of the mutually corresponding second adjacent pixel set and second matching pixel set, obtaining the phase difference value in the first direction.
The technical process of steps 2102 to 2106 is similar to that of steps 2002 to 2006 and is not repeated here.
When a brightness map area includes one column of pixels of the target brightness map, the resulting first and second segmented brightness maps may be called the left image and the right image, respectively, and the phase difference obtained from the left and right images reflects the difference of the imaging positions of the object in the horizontal direction.
Since the phase difference obtained when a brightness map area includes one column of pixels reflects the imaging position difference of the object in the horizontal direction, and the phase difference obtained when it includes one row of pixels reflects the imaging position difference in the vertical direction, the phase difference obtained according to the embodiments of the present application can reflect the imaging position difference of the object both vertically and horizontally, so its accuracy is higher.
In an embodiment, the focusing method above may further include: generating a depth value from the defocus distance value. The image distance in the in-focus state can be calculated from the defocus distance value, the object distance can be obtained from the image distance and the focal length, and the object distance is the depth value.
It should be understood that although the steps in the flowcharts of FIG. 7, FIG. 11, FIG. 13, FIG. 16 and FIG. 17 to FIG. 21 are displayed in sequence as indicated by the arrows, they are not necessarily executed in that sequence. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be executed in other orders. Moreover, at least some of the steps in these figures may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments, and whose execution order is not necessarily sequential; they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
FIG. 22 is a structural block diagram of a focusing apparatus of an embodiment. As shown in FIG. 22, the focusing apparatus is applied to an electronic device that includes an image sensor; the image sensor includes a plurality of pixel point groups arranged in an array, each group includes M*N pixel points arranged in an array, and each pixel point corresponds to one photosensitive unit, where M and N are both natural numbers greater than or equal to 2. The apparatus includes a phase difference acquisition module 2210, a processing module 2212 and a control module 2214.
The phase difference acquisition module 2210 is configured to acquire a phase difference value through the image sensor, the phase difference value including a phase difference value in a first direction and a phase difference value in a second direction, the first direction and the second direction forming a preset angle.
The processing module 2212 is configured to determine a defocus distance value according to the phase difference value in the first direction and the phase difference value in the second direction.
The control module 2214 is configured to control the lens to move for focusing according to the defocus distance value.
The focusing apparatus above acquires the phase difference values in the first and second directions, determines the defocus distance value and movement direction from them, and controls the lens movement according to the defocus distance value and movement direction, realizing phase detection auto focus; because it can output phase difference values in both directions, the phase difference value can be used effectively for focusing in scenes with horizontal or vertical texture, improving the accuracy and stability of focusing.
In an embodiment, the processing module 2212 is further configured to obtain the confidences of the phase difference values in the first and second directions; select, from the two, the phase difference value with the higher confidence as the target phase difference value; and determine the corresponding defocus distance value from the correspondence between phase difference values and defocus distance values according to the target phase difference value.
In an embodiment, the phase difference acquisition module 2210 includes a brightness determining unit and a phase difference determining unit.
The brightness determining unit is configured to obtain the target brightness map according to the brightness values of the pixel points included in each pixel point group.
The phase difference determining unit is configured to segment the target brightness map to obtain the first and second segmented brightness maps, determine the phase difference values of mutually matched pixels according to the position differences of the mutually matched pixels in the two maps, and determine the phase difference value in the first direction or in the second direction according to the phase difference values of the mutually matched pixels.
In an embodiment, the brightness determining unit is further configured to, for each pixel point group, obtain the group's sub-brightness map according to the brightness values of the sub-pixel points at the same position in each pixel point of the group, and generate the target brightness map according to the sub-brightness maps of the groups.
In an embodiment, the brightness determining unit is further configured to determine the sub-pixel points at the same position in each pixel point to obtain multiple sub-pixel point sets, the sub-pixel points of each set occupying the same position in their pixel points; for each set, obtain the corresponding brightness value according to the brightness values of its sub-pixel points; and generate the sub-brightness map according to the brightness values of the sets.
In an embodiment, the brightness determining unit is further configured to determine the color coefficient of each sub-pixel point in a set, the color coefficient being determined according to the color channel of the sub-pixel point; multiply each sub-pixel point's color coefficient by its brightness value to obtain its weighted brightness; and add the weighted brightnesses of the sub-pixel points in the set to obtain the set's brightness value.
In an embodiment, each pixel point includes a plurality of sub-pixel points arranged in an array;
the brightness determining unit is further configured to determine a target pixel point in each pixel point group to obtain multiple target pixel points; generate each group's sub-brightness map according to the brightness values of the sub-pixel points included in its target pixel point; and generate the target brightness map according to the sub-brightness maps of the groups.
In an embodiment, the brightness determining unit is further configured to determine, in each pixel point group, the pixel point whose color channel is green, and determine that pixel point as the target pixel point.
In an embodiment, the brightness determining unit is further configured to determine the pixel points at the same position in each pixel point group to obtain multiple pixel point sets, the pixel points of each set occupying the same position in their groups; and generate, according to the brightness values of the pixel points in the sets, multiple target brightness maps in one-to-one correspondence with the sets;
the phase difference determining unit is further configured to generate, for each target brightness map, the corresponding intermediate phase difference map according to the phase differences of the mutually matched pixels; and generate the phase difference value in the first direction and the phase difference value in the second direction according to the intermediate phase difference maps corresponding to the target brightness maps.
In an embodiment, the phase difference determining unit is further configured to determine the pixels at the same position in each intermediate phase difference map to obtain multiple phase difference pixel sets, the pixels of each set occupying the same position in their intermediate phase difference maps;
for each phase difference pixel set, splice the pixels of the set to obtain the corresponding sub phase difference map;
and splice the obtained sub phase difference maps to obtain the target phase difference map, which includes the phase difference value in the first direction and the phase difference value in the second direction.
In an embodiment, the phase difference determining unit is further configured to segment the target brightness map to obtain multiple brightness map areas, each including one row or one column of pixels of the target brightness map;
obtain multiple first brightness map areas and multiple second brightness map areas from them, a first area including the pixels of an even-numbered row or column of the target brightness map and a second area including those of an odd-numbered row or column;
and form the first segmented brightness map from the first brightness map areas and the second segmented brightness map from the second brightness map areas.
In an embodiment, the phase difference determining unit is further configured to, when a brightness map area includes one row of pixels of the target brightness map, determine a first adjacent pixel set in each row of pixels of the first segmented brightness map, the pixels of the set corresponding to the same pixel point group;
for each first adjacent pixel set, search the second segmented brightness map for the corresponding first matching pixel set;
and determine, from the position difference between each first adjacent pixel set and each first matching pixel set, the phase difference of the mutually corresponding sets, obtaining the phase difference value in the second direction.
In an embodiment, the phase difference determining unit is further configured to, when a brightness map area includes one column of pixels of the target brightness map, determine a second adjacent pixel set in each column of pixels of the first segmented brightness map, the pixels of the set corresponding to the same pixel point group;
for each second adjacent pixel set, search the second segmented brightness map for the corresponding second matching pixel set;
and determine, from the position difference between each second adjacent pixel set and each second matching pixel set, the phase difference of the mutually corresponding sets, obtaining the phase difference value in the first direction.
The division of the modules in the focusing apparatus above is only for illustration; in other embodiments, the focusing apparatus may be divided into different modules as required to complete all or part of its functions.
FIG. 23 is a schematic diagram of the internal structure of an electronic device in an embodiment. As shown in FIG. 23, the electronic device includes a processor and a memory connected through a system bus. The processor provides the computing and control capability that supports the operation of the entire electronic device. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program; the computer program can be executed by the processor to implement the focusing method provided in the following embodiments. The internal memory provides a cached runtime environment for the operating system and the computer program in the non-volatile storage medium. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device or the like.
Each module of the focusing apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on a terminal or a server, and the program modules it constitutes may be stored in the memory of the terminal or server. When executed by the processor, the computer program implements the steps of the methods described in the embodiments of the present application.
The embodiments of the present application also provide a computer-readable storage medium: one or more non-volatile computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the focusing method.
A computer program product containing instructions, when run on a computer, causes the computer to execute the focusing method.
Any reference to memory, storage, a database or other media used in the embodiments of the present application may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM), which acts as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The embodiments above express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be understood as limiting the scope of the patent application. It should be pointed out that a person of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within its scope of protection. Therefore, the scope of protection of this patent application shall be subject to the appended claims.
Claims (20)
- A focusing method, applied to an electronic device, the electronic device including an image sensor, the image sensor including a plurality of pixel point groups arranged in an array, each pixel point group including M*N pixel points arranged in an array, each pixel point corresponding to one photosensitive unit, where M and N are both natural numbers greater than or equal to 2, the method comprising: acquiring a phase difference value through the image sensor during shooting, the phase difference value including a phase difference value in a first direction and a phase difference value in a second direction, the first direction and the second direction forming a preset angle; determining a defocus distance value according to the phase difference value in the first direction and the phase difference value in the second direction; and controlling a lens to move for focusing according to the defocus distance value.
- The method according to claim 1, wherein determining the defocus distance value according to the phase difference values in the first and second directions comprises: obtaining a first confidence of the phase difference value in the first direction and a second confidence of the phase difference value in the second direction; selecting the phase difference value whose confidence is the larger of the first confidence and the second confidence as the target phase difference value; and determining the corresponding defocus distance value from the correspondence between phase difference values and defocus distance values according to the target phase difference value.
- The method according to claim 1, wherein acquiring the phase difference value comprises: obtaining a target brightness map according to the brightness values of the pixel points included in each pixel point group; segmenting the target brightness map to obtain a first segmented brightness map and a second segmented brightness map, and determining the phase difference values of mutually matched pixels according to the position differences of the mutually matched pixels in the first and second segmented brightness maps; and determining the phase difference value in the first direction or the phase difference value in the second direction according to the phase difference values of the mutually matched pixels.
- The method according to claim 3, wherein segmenting the target brightness map to obtain the first and second segmented brightness maps comprises: segmenting the target brightness map to obtain multiple brightness map areas, each brightness map area including one row of pixels of the target brightness map or one column of pixels of the target brightness map; obtaining multiple first brightness map areas and multiple second brightness map areas from the multiple brightness map areas, a first brightness map area including the pixels of an even-numbered row or an even-numbered column of the target brightness map, and a second brightness map area including the pixels of an odd-numbered row or an odd-numbered column; and forming the first segmented brightness map from the multiple first brightness map areas and the second segmented brightness map from the multiple second brightness map areas.
- The method according to claim 4, wherein determining the phase difference of mutually matched pixels according to the position differences of the mutually matched pixels in the first and second segmented brightness maps comprises: when a brightness map area includes one row of pixels of the target brightness map, determining a first adjacent pixel set in each row of pixels included in the first segmented brightness map, the pixels of the first adjacent pixel set corresponding to the same pixel point group; for each first adjacent pixel set, searching the second segmented brightness map for the corresponding first matching pixel set; and determining, from the position difference between each first adjacent pixel set and each first matching pixel set, the phase difference of the mutually corresponding first adjacent pixel set and first matching pixel set, obtaining the phase difference value in the second direction.
- The method according to claim 4, wherein determining the phase difference of mutually matched pixels according to the position differences of the mutually matched pixels in the first and second segmented brightness maps comprises: when a brightness map area includes one column of pixels of the target brightness map, determining a second adjacent pixel set in each column of pixels included in the first segmented brightness map, the pixels of the second adjacent pixel set corresponding to the same pixel point group; for each second adjacent pixel set, searching the second segmented brightness map for the corresponding second matching pixel set; and determining, from the position difference between each second adjacent pixel set and each second matching pixel set, the phase difference of the mutually corresponding second adjacent pixel set and second matching pixel set, obtaining the phase difference value in the first direction.
- The method according to claim 3, wherein each pixel point includes a plurality of sub-pixel points arranged in an array, and obtaining the target brightness map according to the brightness values of the pixel points included in each pixel point group comprises: for each pixel point group, obtaining the sub-brightness map corresponding to the group according to the brightness values of the sub-pixel points at the same position in each pixel point of the group; and generating the target brightness map according to the sub-brightness maps corresponding to the pixel point groups.
- The method according to claim 7, wherein obtaining the sub-brightness map corresponding to the pixel point group according to the brightness values of the sub-pixel points at the same position in each pixel point of the group comprises: determining the sub-pixel points at the same position in each pixel point to obtain multiple sub-pixel point sets, the sub-pixel points of each set occupying the same position in their pixel points; for each sub-pixel point set, obtaining the brightness value corresponding to the set according to the brightness values of its sub-pixel points; and generating the sub-brightness map according to the brightness values corresponding to the sub-pixel point sets.
- The method according to claim 8, wherein obtaining the brightness value corresponding to the sub-pixel point set according to the brightness values of its sub-pixel points comprises: determining the color coefficient corresponding to each sub-pixel point in the set, the color coefficient being determined according to the color channel of the sub-pixel point; multiplying the color coefficient of each sub-pixel point in the set by its brightness value to obtain the weighted brightness of each sub-pixel point; and adding the weighted brightnesses of the sub-pixel points in the set to obtain the brightness value corresponding to the set.
- The method according to claim 3, wherein each pixel point includes a plurality of sub-pixel points arranged in an array, and obtaining the target brightness map according to the brightness values of the pixel points included in each pixel point group comprises: determining a target pixel point in each pixel point group to obtain multiple target pixel points; generating the sub-brightness map corresponding to each pixel point group according to the brightness values of the sub-pixel points included in its target pixel point; and generating the target brightness map according to the sub-brightness maps corresponding to the pixel point groups.
- The method according to claim 10, wherein determining the target pixel point in each pixel point group comprises: determining, in each pixel point group, the pixel point whose color channel is green; and determining the pixel point whose color channel is green as the target pixel point.
- The method according to claim 3, wherein obtaining the target brightness map according to the brightness values of the pixel points included in each pixel point group comprises: determining the pixel points at the same position in each pixel point group to obtain multiple pixel point sets, the pixel points of each set occupying the same position in their pixel point groups; and generating, according to the brightness values of the pixel points in the multiple sets, multiple target brightness maps in one-to-one correspondence with the sets; and wherein generating the phase difference value in the first direction and the phase difference value in the second direction according to the phase differences of the mutually matched pixels comprises: for each target brightness map, generating the intermediate phase difference map corresponding to the target brightness map according to the phase differences of the mutually matched pixels; and generating the phase difference value in the first direction and the phase difference value in the second direction according to the intermediate phase difference maps corresponding to the target brightness maps.
- The method according to claim 12, wherein generating the phase difference values in the first and second directions according to the intermediate phase difference maps corresponding to the target brightness maps comprises: determining the pixels at the same position in each intermediate phase difference map to obtain multiple phase difference pixel sets, the pixels of each set occupying the same position in their intermediate phase difference maps; for each phase difference pixel set, splicing the pixels of the set to obtain the sub phase difference map corresponding to the set; and splicing the obtained sub phase difference maps to obtain a target phase difference map, the target phase difference map including the phase difference value in the first direction and the phase difference value in the second direction.
- An imaging assembly, comprising: an image sensor, the image sensor including a plurality of pixel point groups arranged in an array, each pixel point group including M*N pixel points arranged in an array, each pixel point corresponding to one photosensitive unit, where M and N are both natural numbers greater than or equal to 2; the photosensitive unit being configured to generate, through photoelectric conversion, a pixel signal consistent with the received light intensity.
- The imaging assembly according to claim 14, wherein the photosensitive unit includes photosensitive elements arranged in an array, the photosensitive elements being photodiodes.
- An imaging device, comprising a lens and a filter, and further comprising an imaging assembly, the lens, the filter and the imaging assembly being located in sequence on the incident light path; the imaging assembly including an image sensor, the image sensor including a plurality of pixel point groups arranged in an array, each pixel point group including M*N pixel points arranged in an array, each pixel point corresponding to one photosensitive unit, where M and N are both natural numbers greater than or equal to 2; the photosensitive unit being configured to generate, through photoelectric conversion, a pixel signal consistent with the received light intensity.
- The imaging device according to claim 16, wherein the photosensitive unit includes photosensitive elements arranged in an array, the photosensitive elements being photodiodes.
- A focusing apparatus, applied to an electronic device, the electronic device including an image sensor, the image sensor including a plurality of pixel point groups arranged in an array, each pixel point group including M*N pixel points arranged in an array, each pixel point corresponding to one photosensitive unit, where M and N are both natural numbers greater than or equal to 2, the apparatus comprising: a phase difference acquisition module configured to acquire a phase difference value through the image sensor, the phase difference value including a phase difference value in a first direction and a phase difference value in a second direction, the first direction and the second direction forming a preset angle; a processing module configured to determine a defocus distance value according to the phase difference value in the first direction and the phase difference value in the second direction; and a control module configured to control a lens to move for focusing according to the defocus distance value.
- An electronic device, comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, implements the steps of the method according to any one of claims 1 to 13.
- A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 13.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
- CN201911102725.0A CN112866511B (zh) | 2019-11-12 | 2019-11-12 | Imaging assembly, focusing method and apparatus, and electronic device
CN201911102725.0 | 2019-11-12 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021093312A1 true WO2021093312A1 (zh) | 2021-05-20 |
Family
ID=75911829
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
- PCT/CN2020/093662 WO2021093312A1 (zh) | 2019-11-12 | 2020-06-01 | Imaging assembly, focusing method and apparatus, and electronic device
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112866511B (zh) |
WO (1) | WO2021093312A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN115734074A (zh) * | 2021-08-27 | 2023-03-03 | 豪威科技股份有限公司 | Image focusing method and associated image sensor |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- KR20230006343A (ko) | 2021-07-02 | 삼성전자주식회사 | Electronic device and control method therefor |
- CN115314635B (zh) * | 2022-08-03 | 2024-03-26 | Oppo广东移动通信有限公司 | Model training method and apparatus for defocus amount determination |
- CN117135449A (zh) * | 2023-01-13 | 2023-11-28 | 荣耀终端有限公司 | Automatic focusing method and electronic device |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN104064577A (zh) * | 2014-07-16 | 2014-09-24 | 上海集成电路研发中心有限公司 | Image sensor for automatic focusing |
- CN104838313A (zh) * | 2012-09-19 | 2015-08-12 | 富士胶片株式会社 | Imaging device and control method therefor |
- CN104954645A (zh) * | 2014-03-24 | 2015-09-30 | 佳能株式会社 | Image pickup element, image pickup apparatus, and image processing method |
- JP2016080742A (ja) * | 2014-10-10 | 2016-05-16 | キヤノン株式会社 | Imaging apparatus |
- CN106210548A (zh) * | 2016-09-05 | 2016-12-07 | 信利光电股份有限公司 | Fast focusing method and apparatus |
- CN106921823A (zh) * | 2017-04-28 | 2017-07-04 | 广东欧珀移动通信有限公司 | Image sensor, camera module and terminal device |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- JP2013141108A (ja) * | 2011-12-29 | 2013-07-18 | Nikon Corp | Interchangeable lens and camera body |
- CN108337424B (zh) * | 2017-01-17 | 2021-04-16 | 中兴通讯股份有限公司 | Phase focusing method and apparatus therefor |
- KR102545173B1 (ko) * | 2018-03-09 | 2023-06-19 | 삼성전자주식회사 | Image sensor including phase detection pixels and image capturing apparatus |
- CN109905600A (zh) * | 2019-03-21 | 2019-06-18 | 上海创功通讯技术有限公司 | Imaging method, imaging apparatus and computer-readable storage medium |
-
2019
- 2019-11-12 CN CN201911102725.0A patent/CN112866511B/zh active Active
-
2020
- 2020-06-01 WO PCT/CN2020/093662 patent/WO2021093312A1/zh active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN104838313A (zh) * | 2012-09-19 | 2015-08-12 | 富士胶片株式会社 | Imaging device and control method therefor |
- CN104954645A (zh) * | 2014-03-24 | 2015-09-30 | 佳能株式会社 | Image pickup element, image pickup apparatus, and image processing method |
- CN104064577A (zh) * | 2014-07-16 | 2014-09-24 | 上海集成电路研发中心有限公司 | Image sensor for automatic focusing |
- JP2016080742A (ja) * | 2014-10-10 | 2016-05-16 | キヤノン株式会社 | Imaging apparatus |
- CN106210548A (zh) * | 2016-09-05 | 2016-12-07 | 信利光电股份有限公司 | Fast focusing method and apparatus |
- CN106921823A (zh) * | 2017-04-28 | 2017-07-04 | 广东欧珀移动通信有限公司 | Image sensor, camera module and terminal device |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN115734074A (zh) * | 2021-08-27 | 2023-03-03 | 豪威科技股份有限公司 | Image focusing method and associated image sensor |
- CN115734074B (zh) * | 2021-08-27 | 2024-01-12 | 豪威科技股份有限公司 | Image focusing method and associated image sensor |
Also Published As
Publication number | Publication date |
---|---|
CN112866511B (zh) | 2022-06-14 |
CN112866511A (zh) | 2021-05-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- WO2021093312A1 (zh) | Imaging assembly, focusing method and apparatus, and electronic device | |
US20230328374A1 (en) | Thin multi-aperture imaging system with auto-focus and methods for using same | |
US10397465B2 (en) | Extended or full-density phase-detection autofocus control | |
US9338380B2 (en) | Image processing methods for image sensors with phase detection pixels | |
US20150381951A1 (en) | Pixel arrangements for image sensors with phase detection pixels | |
- WO2021093635A1 (zh) | Image processing method and apparatus, electronic device, and computer-readable storage medium | |
- WO2021093637A1 (zh) | Focus tracking method and apparatus, electronic device, and computer-readable storage medium | |
- WO2023016144A1 (zh) | Focusing control method and apparatus, imaging device, electronic device, and computer-readable storage medium | |
- CN112866675B (zh) | Depth map generation method and apparatus, electronic device, and computer-readable storage medium | |
- CN112866510B (zh) | Focusing method and apparatus, electronic device, and computer-readable storage medium | |
- WO2021093502A1 (zh) | Phase difference acquisition method and apparatus, and electronic device | |
- CN112866655B (zh) | Image processing method and apparatus, electronic device, and computer-readable storage medium | |
- WO2021093528A1 (zh) | Focusing method and apparatus, electronic device, and computer-readable storage medium | |
- CN112866545B (zh) | Focusing control method and apparatus, electronic device, and computer-readable storage medium | |
- CN112866552B (zh) | Focusing method and apparatus, electronic device, and computer-readable storage medium | |
- CN112862880B (zh) | Depth information acquisition method and apparatus, electronic device, and storage medium | |
- KR20240045876A (ko) | Imaging apparatus and method for auto focusing | |
- CN112866547B (zh) | Focusing method and apparatus, electronic device, and computer-readable storage medium | |
- CN112866544B (zh) | Phase difference acquisition method, apparatus, device, and storage medium | |
- CN112866543B (zh) | Focusing control method and apparatus, electronic device, and computer-readable storage medium | |
- WO2021093537A1 (zh) | Phase difference acquisition method and apparatus, electronic device, and computer-readable storage medium | |
US20240127407A1 (en) | Image sensor apparatus for capturing depth information | |
- CN112866551B (zh) | Focusing method and apparatus, electronic device, and computer-readable storage medium | |
- CN112866674A (zh) | Depth map acquisition method and apparatus, electronic device, and computer-readable storage medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20888563 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20888563 Country of ref document: EP Kind code of ref document: A1 |