WO2021093502A1 - Phase difference obtaining method and apparatus, and electronic device - Google Patents

Phase difference obtaining method and apparatus, and electronic device

Info

Publication number
WO2021093502A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
phase difference
sub
map
brightness
Prior art date
Application number
PCT/CN2020/120847
Other languages
French (fr)
Chinese (zh)
Inventor
贾玉虎
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Publication of WO2021093502A1 publication Critical patent/WO2021093502A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/67: Focus control based on electronic image sensor signals
    • H04N23/672: Focus control based on electronic image sensor signals based on the phase difference signals

Definitions

  • This application relates to the field of imaging, in particular to a method and device for obtaining phase difference, electronic equipment, and computer-readable storage media.
  • Phase detection auto focus (English: phase detection auto focus; abbreviated: PDAF).
  • Traditional phase detection autofocus sets phase detection pixel points in pairs among the pixel points of the image sensor: in each phase detection pixel point pair, one pixel point is shielded on its left side and the other is shielded on its right side. The imaging beam directed at each pair is thereby separated into a left part and a right part, and the phase difference is obtained by comparing the images formed by these two parts. Focusing can then be performed according to the phase difference, where the phase difference refers to the difference in the imaging positions of imaging light incident from different directions.
  • However, setting phase detection pixel points in the image sensor in this way does not yield a highly accurate phase difference.
  • The embodiments of the present application provide a method and apparatus for acquiring a phase difference, an electronic device, and a computer-readable storage medium, which can improve the accuracy of the acquired phase difference.
  • a method for obtaining a phase difference is applied to an electronic device.
  • the electronic device includes an image sensor.
  • The image sensor includes a plurality of pixel point groups arranged in an array, and each pixel point group includes M*N pixel points arranged in an array; each pixel point corresponds to a photosensitive unit, where M and N are both natural numbers greater than or equal to 2. The method includes:
  • performing scene detection on a captured image to obtain a scene type; and acquiring, through the image sensor, a phase difference value corresponding to the scene type, where the phase difference value is a phase difference value in a first direction or a phase difference value in a second direction, and the first direction and the second direction form a preset angle.
  • A phase difference acquisition apparatus is applied to an electronic device; the electronic device includes an image sensor, the image sensor includes a plurality of pixel point groups arranged in an array, and each pixel point group includes M*N pixel points arranged in an array; each pixel point corresponds to a photosensitive unit, where M and N are both natural numbers greater than or equal to 2. The apparatus includes:
  • a scene detection module, configured to perform scene detection on the captured image to obtain the scene type; and
  • a phase difference acquisition module, configured to acquire a phase difference value corresponding to the scene type through the image sensor, where the phase difference value is a phase difference value in a first direction or a phase difference value in a second direction, and the first direction and the second direction form a preset angle.
  • An electronic device includes a memory and a processor, and a computer program is stored in the memory.
  • When the computer program is executed by the processor, the operations of the method are realized.
  • A computer-readable storage medium has a computer program stored thereon; when the computer program is executed by a processor, the operations of the method are realized.
  • With the above phase difference acquisition method and apparatus, electronic device, and computer-readable storage medium, scene detection is performed on the image to obtain the scene type, and the corresponding phase difference is calculated according to the scene type.
  • The phase difference value may be the phase difference value in the first direction or the phase difference value in the second direction, and the first direction and the second direction form a preset angle; that is, the phase difference of each pixel can be accurately obtained. After scene detection, only the phase difference in one direction needs to be calculated rather than the phase differences in both directions, which greatly saves calculation time, improves calculation speed, and further improves focusing speed.
  • FIG. 1 is a schematic diagram of the principle of phase detection autofocus in an embodiment;
  • FIG. 2 is a schematic diagram of phase detection pixel points arranged in pairs among the pixel points included in the image sensor;
  • FIG. 3 is a schematic diagram of a part of the structure of an image sensor in an embodiment;
  • FIG. 4 is a schematic diagram of the structure of a pixel point in an embodiment;
  • FIG. 5 is a schematic structural diagram of an imaging device in an embodiment;
  • FIG. 6 is a schematic diagram of a filter set on a pixel point group in an embodiment;
  • FIG. 7A is a flowchart of a method for acquiring a phase difference in an embodiment;
  • FIG. 7B is a schematic diagram of a horizontal texture scene in an embodiment;
  • FIG. 7C is a schematic diagram of a circular scene in an embodiment;
  • FIG. 8 is a flowchart of calculating the phase difference corresponding to the scene type in an embodiment;
  • FIG. 9 is a schematic diagram of a pixel point group in an embodiment;
  • FIG. 10 is a schematic diagram of a sub-brightness map of an embodiment;
  • FIG. 11 is a flowchart of obtaining a target brightness map in an embodiment;
  • FIG. 12 is a schematic diagram of generating the sub-brightness map corresponding to a pixel point group according to the brightness values of the sub-pixel points included in the target pixel point of the group in an embodiment;
  • FIG. 13 is a flowchart of obtaining a target brightness map in another embodiment;
  • FIG. 14 is a schematic diagram of determining pixel points at the same position from each pixel point group in an embodiment;
  • FIG. 15 is a schematic diagram of determining pixels at the same position from each intermediate phase difference map in an embodiment;
  • FIG. 16 is a schematic diagram of a target phase difference map in an embodiment;
  • FIG. 17 is a flowchart of a method for performing segmentation processing on a target brightness map to obtain a first segmented brightness map and a second segmented brightness map in an embodiment;
  • FIG. 18 is a schematic diagram of generating a first segmented brightness map and a second segmented brightness map according to the target brightness map in an embodiment;
  • FIG. 19 is a schematic diagram of generating a first segmented brightness map and a second segmented brightness map according to the target brightness map in another embodiment;
  • FIG. 20 is a flowchart of determining the phase difference of mutually matched pixels according to the position difference of the pixels that match each other in the first segmented brightness map and the second segmented brightness map in an embodiment;
  • FIG. 21 is a flowchart of determining the phase difference of mutually matched pixels according to the position difference of the pixels that match each other in the first segmented brightness map and the second segmented brightness map in another embodiment;
  • FIG. 22 is a structural block diagram of an apparatus for acquiring a phase difference in an embodiment;
  • FIG. 23 is a block diagram of a computer device provided by an embodiment of this application.
  • FIG. 1 is a schematic diagram of the principle of phase detection auto focus (PDAF).
  • M1 is the position of the image sensor when the imaging device is in the in-focus state, where the in-focus state refers to the state of successful focusing.
  • When the image sensor is at position M1, the imaging light g reflected by the object W toward the lens Lens in different directions converges on the image sensor; that is, the imaging light g reflected by the object W toward the lens Lens in different directions is imaged at the same position on the image sensor, and at this time the image on the image sensor is clear.
  • M2 and M3 are the possible positions of the image sensor when the imaging device is not in focus.
  • When the image sensor is at the M2 position or the M3 position, the imaging light g reflected by the object W toward the lens Lens in different directions is imaged at different positions. Referring to FIG. 1, when the image sensor is at the M2 position, the imaging light g reflected by the object W in different directions toward the lens Lens is imaged at position A and position B respectively; when the image sensor is at the M3 position, the imaging light g reflected by the object W in different directions toward the lens Lens is imaged at position C and position D respectively. At this time, the image on the image sensor is not clear.
  • In PDAF, the difference in the positions of the images formed on the image sensor by imaging light entering the lens from different directions can be obtained; for example, the difference between position A and position B, or the difference between position C and position D, can be obtained. After this positional difference is obtained, the defocus distance can be calculated from it together with the geometric relationship between the lens and the image sensor in the camera. The so-called defocus distance refers to the distance between the current position of the image sensor and the position where the image sensor should be in the in-focus state; the imaging device can focus according to the obtained defocus distance.
  • When the imaging device is in focus, the calculated PD value is 0; the larger the calculated value, the farther the image sensor is from the in-focus position, and the smaller the value, the closer it is to the in-focus position.
  • To realize PDAF, phase detection pixel points may be provided in pairs among the pixel points included in the image sensor.
  • As shown in FIG. 2, a phase detection pixel point pair (hereinafter referred to as a pixel point pair) A may be provided in the image sensor.
  • In each pixel point pair, one phase detection pixel point is shielded on the left side (English: Left Shield), and the other phase detection pixel point is shielded on the right side (English: Right Shield).
  • For the phase detection pixel point that is shielded on the left, only the right part of the imaging beam directed at it can be imaged on its photosensitive part (that is, the part that is not shielded).
  • For the phase detection pixel point that is shielded on the right, only the left part of the imaging beam directed at it can be imaged on its photosensitive part (that is, the part that is not shielded). In this way, the imaging beam is divided into left and right parts, and the phase difference can be obtained by comparing the images formed by the left and right parts of the imaging beam.
  • Since the phase detection pixel points set in the image sensor are shielded on the left and right sides respectively, for scenes with horizontal texture the PD value cannot be calculated by these phase detection pixel points: if the shooting scene is a horizontal line, left and right images are still obtained according to the PD characteristics, but no PD value can be calculated from them.
  • Therefore, an imaging component is provided in the embodiments of the present application that can be used to detect both the phase difference value in the first direction and the phase difference value in the second direction; for a horizontal texture scene, the phase difference value in the second direction can be used to achieve focusing.
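  • As a minimal illustration (added here for exposition, not part of the patent) of why left/right-shielded pixel pairs fail on horizontal textures: a scene of purely horizontal stripes yields identical left and right images, so every candidate horizontal shift has the same matching cost and no unique PD value can be recovered. A small numpy sketch:

```python
import numpy as np

# Horizontal stripes: every row is constant, rows alternate 0/1.
scene = np.tile((np.arange(8) % 2)[:, None], (1, 8)).astype(float)

# Stand-ins for the images seen by the left- and right-shielded pixels.
left_img, right_img = scene[:, :-1], scene[:, 1:]

# Sum-of-absolute-differences cost for several horizontal shifts.
for shift in range(-2, 3):
    cost = np.abs(left_img - np.roll(right_img, shift, axis=1)).sum()
    print(shift, cost)  # identical cost for every shift -> horizontal PD undefined
# A top/bottom (vertical) split of the same scene would still produce a usable
# match, which is what the second-direction phase difference value provides.
```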
  • the present application provides an imaging assembly.
  • the imaging component includes an image sensor.
  • The image sensor may be a complementary metal oxide semiconductor (English: Complementary Metal Oxide Semiconductor; abbreviation: CMOS) image sensor, a charge-coupled device (English: Charge-coupled Device; abbreviation: CCD) image sensor, a quantum thin-film sensor, or an organic sensor.
  • FIG. 3 is a schematic diagram of a part of the image sensor in an embodiment.
  • the image sensor includes a plurality of pixel point groups Z arranged in an array, each pixel point group Z includes a plurality of pixel points D arranged in an array, and each pixel point D corresponds to a photosensitive unit.
  • Each pixel point group Z includes M*N pixel points arranged in an array, where M and N are both natural numbers greater than or equal to 2.
  • Each pixel point D includes a plurality of sub-pixel points d arranged in an array; that is, each photosensitive unit can be composed of a plurality of photosensitive elements arranged in an array, where a photosensitive element is an element that can convert light signals into electrical signals. Referring to FIG. 3, each pixel point group Z includes 4 pixel points D arranged in a 2*2 array, and each pixel point D may include 4 sub-pixel points d arranged in a 2*2 array.
  • the four sub-pixel points d jointly cover a microlens W.
  • each pixel point D includes 2*2 photodiodes, and the 2*2 photodiodes are arranged correspondingly to the 4 sub-pixel points d arranged in a 2*2 array.
  • Each photodiode is used to receive optical signals and perform photoelectric conversion, thereby converting the optical signals into electrical signals for output.
  • The 4 sub-pixel points d included in each pixel point D are set corresponding to the same color filter, so each pixel point D corresponds to one color channel, such as the red channel R, the green channel G, or the blue channel B.
  • By combining and outputting the signals of sub-pixel point 1 and sub-pixel point 2, and combining and outputting the signals of sub-pixel point 3 and sub-pixel point 4, two PD pixel pairs are constructed along the second direction (i.e., the vertical direction), and from their phase values the PD value (phase difference value) in the second direction can be determined for each sub-pixel point in pixel point D.
  • Similarly, the signals of sub-pixel point 1 and sub-pixel point 3 are combined and output, and the signals of sub-pixel point 2 and sub-pixel point 4 are combined and output, thereby constructing two PD pixel pairs along the first direction (i.e., the horizontal direction).
  • From the phase values of these two PD pixel pairs, the PD value (phase difference value) in the first direction can be determined for each sub-pixel point in pixel point D.
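  • The pairing just described can be sketched as follows (a hedged illustration; the d1 d2 / d3 d4 layout within one pixel point is an assumption for this sketch, not taken from the patent figures):

```python
import numpy as np

def pd_pairs(d1, d2, d3, d4):
    """Combine the four sub-pixel signals of one 2*2 pixel point into PD pairs.

    Assumed layout within the pixel point:  d1 d2
                                            d3 d4
    """
    # Combining d1+d2 (top) and d3+d4 (bottom) builds a PD pair along the
    # second direction (vertical phase difference).
    top, bottom = d1 + d2, d3 + d4
    # Combining d1+d3 (left) and d2+d4 (right) builds a PD pair along the
    # first direction (horizontal phase difference).
    left, right = d1 + d3, d2 + d4
    return (top, bottom), (left, right)

(top, bottom), (left, right) = pd_pairs(1.0, 2.0, 3.0, 4.0)
print(top, bottom, left, right)  # 3.0 7.0 4.0 6.0
```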
  • FIG. 5 is a schematic structural diagram of an imaging device in an embodiment.
  • The imaging device includes a lens 50, a filter 52, and an imaging component 54.
  • The lens 50, the filter 52, and the imaging component 54 are located sequentially along the incident light path; that is, the lens 50 is disposed above the filter 52, and the filter 52 is disposed above the imaging component 54.
  • the imaging component 54 includes the image sensor in FIG. 3.
  • the image sensor includes a plurality of pixel point groups Z arranged in an array.
  • Each pixel point group Z includes a plurality of pixel points D arranged in an array.
  • Each pixel point D corresponds to a photosensitive unit, and each photosensitive unit can be composed of a plurality of photosensitive elements arranged in an array.
  • In this embodiment, each pixel point D includes 4 sub-pixel points d arranged in a 2*2 array, and each sub-pixel point d corresponds to one photodiode 542; that is, the 2*2 photodiodes 542 are arranged corresponding to the 4 sub-pixel points d arranged in the 2*2 array.
  • The 4 sub-pixel points d share one microlens.
  • The filter 52 may include three types, red, green, and blue, which transmit only light at the corresponding red, green, or blue wavelengths, respectively.
  • The 4 sub-pixel points d included in one pixel point D are arranged corresponding to filters of the same color.
  • In other embodiments, the filter may also be white, which facilitates the passage of light over a larger spectral (wavelength) range and increases the luminous flux through the white filter.
  • the lens 50 is used to receive incident light and transmit the incident light to the filter 52. After the filter 52 performs filtering processing on the incident light, the filtered light is incident on the imaging component 54 on a pixel basis.
  • The photosensitive unit in the image sensor included in the imaging component 54 converts the light incident from the filter 52 into a charge signal through the photoelectric effect, and generates a pixel signal corresponding to the charge signal; the charge signal corresponds to the received light intensity.
  • the pixels included in the image sensor and the pixels included in the image are two different concepts.
  • the pixels included in the image refer to the smallest component unit of the image, which is generally represented by a sequence of numbers.
  • the sequence of numbers can be referred to as the pixel value of a pixel.
  • The embodiments of the present application involve both concepts of "pixels included in an image sensor" and "pixels included in an image". To facilitate readers' understanding, a brief explanation is provided here.
  • FIG. 6 is a schematic diagram of a filter set on a pixel point group in an embodiment.
  • The pixel point group Z includes 4 pixel points D arranged in an array of two rows and two columns. The color channel of the pixel point in the first row and first column is green, that is, its filter is a green filter; the color channel of the pixel point in the first row and second column is red, that is, its filter is a red filter; the color channel of the pixel point in the second row and first column is blue, that is, its filter is a blue filter; and the color channel of the pixel point in the second row and second column is green, that is, its filter is a green filter.
  • FIG. 7A is a flowchart of a method for acquiring a phase difference in an embodiment.
  • The method for acquiring the phase difference in this embodiment is described by taking the imaging device in FIG. 5 as an example.
  • The method includes operation 702 to operation 706.
  • In operation 702, scene detection is performed on the captured image to obtain the scene type.
  • When an image is captured by the imaging device of an electronic device, the captured image contains scene information, and different shooting objects may carry different scene information.
  • If the object being shot is a straight line in the horizontal direction, as shown in FIG. 7B, the phase difference value in the horizontal direction cannot be calculated, and the phase difference value in the vertical direction needs to be calculated to facilitate subsequent focusing.
  • If a basketball is being shot and the horizontal direction is not a straight line, as shown in FIG. 7C, the phase difference value in the horizontal direction can be calculated and used for subsequent focusing; in this scenario the phase difference value in the vertical direction does not need to be calculated, which saves calculation time.
  • an artificial intelligence model or an edge operator can be used to detect the scene of the captured image to obtain the scene type.
  • the scene types may include horizontal texture scenes, vertical texture scenes, circular texture scenes, and so on.
  • In operation 704, a phase difference value corresponding to the scene type is obtained through the image sensor, where the phase difference value is a phase difference value in a first direction or a phase difference value in a second direction; the first direction and the second direction form a preset angle.
  • the phase difference value corresponding to the scene type refers to the phase difference value that can be used for focusing for the scene type.
  • For example, for a horizontal texture scene, the phase difference value in the horizontal direction cannot be calculated.
  • In this case, data is collected by the above-mentioned image sensor, whose pixel point groups each include M*N pixel points arranged in an array, and the phase difference value in the vertical direction is calculated instead.
  • the first direction and the second direction may form a preset angle, and the preset angle may be any angle other than 0 degrees, 180 degrees, and 360 degrees.
  • the first direction may be a horizontal direction
  • the second direction may be a vertical direction.
  • With the above phase difference acquisition method, scene detection is performed on the image to obtain the scene type, and the corresponding phase difference is calculated according to the scene type.
  • The phase difference value can be the phase difference value in the first direction or the phase difference value in the second direction, and the first direction and the second direction form a preset angle; that is, the phase difference of each pixel can be accurately obtained. After scene detection, only the phase difference in the corresponding single direction needs to be calculated rather than the phase differences in both directions, which saves calculation time, increases calculation speed, and further improves focusing speed.
  • Performing scene detection on the captured image to obtain the scene type may include: performing scene detection on the captured image through an artificial intelligence model to obtain the scene type, the artificial intelligence model having been trained using sample images containing the scene types.
  • Specifically, sample images containing various scene types can be collected in advance and used to train the artificial intelligence model, yielding a model that can detect different scene types.
  • the trained artificial intelligence model is stored in the electronic device, and scene detection is performed on the captured image during shooting to obtain the scene type.
  • the artificial intelligence model can efficiently and accurately detect the scene type.
  • Performing scene detection on the captured image to obtain the scene type may include: detecting, through an edge operator, the total number of edge points in the scene of the captured image, the number of edge points in the first direction, and the number of edge points in the second direction; and determining the scene type of the captured image according to the ratio of the number of edge points in the first direction to the total number of edge points and the ratio of the number of edge points in the second direction to the total number of edge points.
  • the edge operator can be configured according to actual conditions.
  • The edge operators include the discrete gradient operator, the Roberts operator, the Laplacian operator, the gradient operator, and the Sobel operator. Sobel's horizontal edge operator can be the 3*3 kernel [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], and the vertical edge operator can be [[-1, -2, -1], [0, 0, 0], [1, 2, 1]].
  • the total number of edge points in the scene of the captured image, the number of edge points in the first direction and the number of edge points in the second direction can be counted.
  • When the ratio of the number of edge points in the first direction to the total number of edge points exceeds a threshold, it indicates that the scene is a horizontal texture scene; when the ratio of the number of edge points in the second direction to the total number of edge points exceeds the threshold, it indicates that the scene is a vertical texture scene.
  • When the ratio of the number of edge points in the first direction to the total number of edge points exceeds the threshold and the ratio of the number of edge points in the second direction to the total number of edge points also exceeds the threshold, it indicates that the scene includes both a horizontal texture scene and a vertical texture scene.
  • For a horizontal texture scene, the PD value in the vertical direction is calculated; for a vertical texture scene, the PD value in the horizontal direction is calculated.
  • the scene type can be quickly detected.
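  • A sketch of the edge-operator scene detection described above, using the standard 3*3 Sobel kernels; the scipy dependency, thresholds, and returned labels are illustrative assumptions rather than values from the patent:

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # horizontal gradient
SOBEL_Y = SOBEL_X.T                                              # vertical gradient

def detect_scene(gray, edge_thresh=50.0, ratio_thresh=0.6):
    gx = convolve(gray.astype(float), SOBEL_X)   # responds to vertical texture
    gy = convolve(gray.astype(float), SOBEL_Y)   # responds to horizontal texture
    edges = np.hypot(gx, gy) > edge_thresh
    total = edges.sum()
    if total == 0:
        return "flat"
    horiz = (edges & (np.abs(gy) > np.abs(gx))).sum()  # horizontal-texture edge points
    vert = total - horiz
    if horiz / total > ratio_thresh:
        return "horizontal texture"   # compute the PD value in the vertical direction
    if vert / total > ratio_thresh:
        return "vertical texture"     # compute the PD value in the horizontal direction
    return "mixed texture"            # both PD directions may be needed
```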
  • Acquiring the phase difference value corresponding to the scene type includes: when the scene type is a horizontal texture scene, acquiring the phase difference value in the second direction through the image sensor; and when the scene type is a vertical texture scene, acquiring the phase difference value in the first direction through the image sensor.
  • the above method further includes: determining a defocus distance value according to the phase difference value; and controlling the lens to move to focus according to the defocus distance value.
  • the corresponding relationship between the phase difference value and the defocus distance value can be obtained through calibration.
  • defocus = PD * slope(DCC), where DCC (Defocus Conversion Coefficient) is obtained by calibration, defocus is the defocus distance value, slope is the slope function, and PD is the phase difference value.
  • The calibration process for the correspondence between the phase difference value and the defocus distance value includes: dividing the effective focus stroke of the camera module into 10 equal parts, i.e., (near-focus DAC - far-focus DAC)/10, so as to cover the focus range of the motor; focusing at each focus DAC position (the DAC can be 0 to 1023) and recording the phase difference at the current focus DAC position; after completing the motor focus stroke, taking the group of 10 focus DACs and comparing them with the obtained PD values, which yields 10 similar ratios K; and fitting the two-dimensional data composed of DAC and PD to obtain a straight line with slope K.
  • the direction of movement can be determined according to the positive or negative value of the defocus distance.
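  • A hedged sketch of the relation defocus = PD * slope(DCC) and of the 10-step calibration sweep described above; the least-squares fit and the synthetic numbers are assumptions:

```python
import numpy as np

def calibrate_dcc(focus_dacs, pd_values):
    """Fit the line DAC = K * PD + b over the calibration sweep.

    The slope K plays the role of the defocus conversion coefficient (DCC):
    it converts a phase difference into a motor displacement in DAC steps.
    """
    k, b = np.polyfit(pd_values, focus_dacs, 1)
    return k, b

def defocus_from_pd(pd, dcc):
    # defocus = PD * slope(DCC); the sign indicates the direction of movement.
    return pd * dcc

# 10 equal focus steps between a near-focus and a far-focus DAC (range 0..1023),
# covering the effective focus stroke of the motor.
dacs = np.linspace(200, 900, 10)
pds = 0.004 * (dacs - 550)          # synthetic recorded PD values
dcc, _ = calibrate_dcc(dacs, pds)
print(defocus_from_pd(0.8, dcc))    # ~200 DAC steps toward the in-focus position
```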
  • The lens movement is then controlled to realize phase detection autofocus.
  • Because the phase difference value in the first direction or in the second direction can be output according to the scene, focusing can be performed effectively for horizontal or vertical texture scenes using the appropriate phase difference value, which improves the accuracy and stability of focusing. Moreover, only the phase difference value in one direction needs to be calculated, which saves calculation time and increases focusing speed, making the method applicable to focusing on moving objects.
  • the above method further includes: correcting the calculated phase difference value by using a gain map.
  • the gain map can be pre-calibrated.
  • The gain map contains the gain coefficient of the PD value corresponding to each pixel; the calculated PD value is multiplied by the corresponding gain coefficient to obtain the corrected PD value. Correcting the PD value in this way makes the calculated PD value more accurate.
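  • A minimal sketch of the gain-map correction, assuming a pre-calibrated per-pixel gain map (the numbers are made up):

```python
import numpy as np

pd_map = np.array([[0.50, 0.48],
                   [0.52, 0.49]])       # computed PD values per pixel
gain_map = np.array([[1.02, 0.98],
                     [1.00, 1.01]])     # pre-calibrated gain coefficients

corrected_pd = pd_map * gain_map        # element-wise correction
print(corrected_pd)
```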
  • the frequency domain algorithm and the space domain algorithm can be used to obtain the phase difference value.
  • The frequency-domain algorithm uses the Fourier shift property: the collected target brightness map is converted from the spatial domain to the frequency domain using the Fourier transform, and the phase correlation is then calculated.
  • When the correlation reaches its maximum value (peak), the corresponding displacement has been found; applying the inverse Fourier transform then yields the displacement in the spatial domain.
  • The spatial-domain algorithm finds feature points, such as edge features, DoG (difference of Gaussian) features, or Harris corner points, and then uses these feature points to calculate the displacement.
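  • A hedged sketch of the frequency-domain approach described above: by the Fourier shift property, a spatial displacement becomes a phase ramp in the frequency domain, so the inverse transform of the normalized cross-power spectrum peaks at the displacement (pure numpy; the test pattern is illustrative):

```python
import numpy as np

def phase_correlation_shift(img_a, img_b, eps=1e-9):
    """Displacement of img_a relative to img_b via phase correlation."""
    fa, fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross = fa * np.conj(fb)
    corr = np.fft.ifft2(cross / (np.abs(cross) + eps)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts (large indices wrap to negatives).
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

a = np.zeros((32, 32)); a[10:14, 8:12] = 1.0
b = np.roll(a, 3, axis=1)                 # shift the block 3 columns right
print(phase_correlation_shift(b, a))      # -> (0, 3)
```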
  • FIG. 8 is a flowchart of obtaining the phase difference in an embodiment. As shown in FIG. 8, obtaining the phase difference includes:
  • In operation 802, a target brightness map is obtained according to the brightness values of the pixel points included in each of the pixel point groups.
  • the brightness value of the pixel of the image sensor can be characterized by the brightness value of the sub-pixel included in the pixel.
  • the imaging device may obtain the target brightness map according to the brightness values of the sub-pixel points in the pixel points included in each pixel point group.
  • the brightness value of a sub-pixel point refers to the brightness value of the light signal received by the photosensitive element corresponding to the sub-pixel point.
  • Each sub-pixel point included in the image sensor corresponds to a photosensitive element that can convert light signals into electrical signals; therefore, the intensity of the light signal received by a sub-pixel point can be obtained from the electrical signal it outputs, and the brightness value of the sub-pixel point can be obtained from the intensity of the received light signal.
  • the target brightness map in the embodiment of the present application is used to reflect the brightness value of the sub-pixels in the image sensor.
  • The target brightness map may include multiple pixels, where the pixel value of each pixel in the target brightness map is obtained from the brightness values of the sub-pixel points in the image sensor.
  • In operation 804, segmentation processing is performed on the target brightness map to obtain a first segmented brightness map and a second segmented brightness map, and the phase difference values of mutually matched pixels are determined according to the position difference of the pixels that match each other in the first segmented brightness map and the second segmented brightness map.
  • In one embodiment, the imaging device may perform segmentation processing on the target brightness map along the column direction (the y-axis direction in the image coordinate system); in this process, each dividing line of the segmentation is perpendicular to the column direction.
  • In another embodiment, the imaging device may perform segmentation processing on the target brightness map along the row direction (the x-axis direction in the image coordinate system); in this process, each dividing line of the segmentation is perpendicular to the row direction.
  • the first segmented brightness map and the second segmented brightness map obtained after the target brightness map is segmented along the column direction can be referred to as the upper image and the lower image, respectively.
  • the first segmented brightness map and the second segmented brightness map obtained after the target brightness map is segmented along the row direction can be called the left image and the right image, respectively.
  • "Pixels that match each other" means that the pixel matrices composed of each pixel itself and its surrounding pixels are similar to each other.
  • For example, the pixel a and its surrounding pixels in the first segmented brightness map form a pixel matrix with 3 rows and 3 columns, and the pixel b and its surrounding pixels in the second segmented brightness map also form a pixel matrix with 3 rows and 3 columns; if the two matrices are similar, the pixel a and the pixel b can be considered to match each other.
  • To judge whether two pixel matrices are similar, the differences between the corresponding pixel values of the two matrices can be calculated, the absolute values of these differences added, and the result of the addition used to judge similarity: if the result is less than a preset threshold, the pixel matrices are considered similar; otherwise, they are considered dissimilar.
  • For example, the difference of 1 and 2, the difference of 15 and 15, the difference of 70 and 70, and so on, are calculated; the absolute values of the differences are then added, and the result of the addition is 3. If this result of 3 is less than the preset threshold, the two pixel matrices with 3 rows and 3 columns are considered similar.
  • Another way to judge whether the pixel matrices are similar is to extract edge features using a Sobel convolution kernel or a Laplacian operator, and to judge similarity by comparing the edge features.
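  • A hedged sketch of the neighborhood matching described above, using a 3*3 window and the sum of absolute differences; the one-dimensional search range and the similarity threshold are assumptions:

```python
import numpy as np

def match_pixel(first, second, row, col, search=4, thresh=30.0):
    """Find the pixel in `second` matching (row, col) of `first`.

    Compares 3x3 neighborhoods by sum of absolute differences (SAD) over a
    1-D search along the row; (row, col) must not lie on the image border.
    Returns the signed offset (the phase difference) or None if no candidate
    is similar enough.
    """
    patch = first[row - 1:row + 2, col - 1:col + 2].astype(float)
    best_d, best_cost = None, np.inf
    for d in range(-search, search + 1):
        c = col + d
        if 1 <= c < second.shape[1] - 1:
            cand = second[row - 1:row + 2, c - 1:c + 2].astype(float)
            cost = np.abs(patch - cand).sum()
            if cost < best_cost:
                best_d, best_cost = d, cost
    return best_d if best_cost < thresh else None
```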
  • The position difference of pixels that match each other refers to the difference between the position of the one pixel in the first segmented brightness map and the position of the other pixel in the second segmented brightness map.
  • As in the above example, the position difference between the mutually matched pixel a and pixel b refers to the difference between the position of pixel a in the first segmented brightness map and the position of pixel b in the second segmented brightness map.
  • Pixels that match each other correspond to the different images formed on the image sensor by imaging light entering the lens from different directions.
  • For example, if pixel a in the first segmented brightness map and pixel b in the second segmented brightness map match each other, pixel a may correspond to the image formed at position A in FIG. 1, and pixel b may correspond to the image formed at position B in FIG. 1.
  • Since the pixels that match each other correspond to images formed by imaging light entering the lens from different directions, the phase difference of the matched pixels can be determined according to their position difference.
  • In operation 806, the phase difference value in the first direction or the phase difference value in the second direction is determined according to the phase difference values of the mutually matched pixels.
  • For example, when the target brightness map is segmented along the row direction into columns, so that the first segmented brightness map includes the even-numbered columns and the second segmented brightness map includes the odd-numbered columns, the phase difference value in the first direction can be determined according to the phase difference of the mutually matched pixel a and pixel b.
  • When the target brightness map is instead segmented along the column direction into rows, and pixel a in the first segmented brightness map and pixel b in the second segmented brightness map match each other, the phase difference value in the second direction can be determined based on the phase difference between the matched pixel a and pixel b.
  • In the above phase difference acquisition method, the target brightness map is obtained according to the brightness values of the pixel points in the pixel point groups.
  • On this basis, the phase difference values of mutually matched pixels can be determined quickly.
  • The rich phase difference values so obtained can improve the accuracy of the phase difference value and improve the accuracy and stability of focusing.
  • In one embodiment, each pixel point includes a plurality of sub-pixel points arranged in an array, and obtaining the target brightness map according to the brightness values of the pixel points included in each pixel point group includes: for each pixel point group, obtaining a sub-brightness map corresponding to the pixel point group according to the brightness values of the sub-pixel points at the same position in each pixel point of the group; and generating the target brightness map according to the sub-brightness maps corresponding to the pixel point groups.
  • the sub-pixel points at the same position of each pixel point refer to the sub-pixel points that are arranged in the same position in each pixel point.
  • FIG. 9 is a schematic diagram of a pixel point group in an embodiment.
  • The pixel point group includes 4 pixel points arranged in an array of two rows and two columns.
  • The sub-pixel points d11, d21, d31, and d41 are arranged at the same position in their respective pixel points, namely the first row and first column; the sub-pixel points d12, d22, d32, and d42 are all at the first row and second column; the sub-pixel points d13, d23, d33, and d43 are all at the second row and first column; and the sub-pixel points d14, d24, d34, and d44 are all at the second row and second column.
  • obtaining the sub-brightness map corresponding to the pixel point group according to the brightness value of the sub-pixel point at the same position of each pixel point in the pixel point group may include operations A1 to A3.
  • In operation A1, the imaging device determines a sub-pixel point at the same position from each pixel point to obtain a plurality of sub-pixel point sets.
  • the positions of the sub-pixels included in each sub-pixel set are the same in the pixel points.
  • Referring to FIG. 9, the imaging device determines the sub-pixel points at the same position from pixel points D1, D2, D3, and D4 respectively, and can obtain 4 sub-pixel point sets J1, J2, J3, and J4. The sub-pixel point set J1 includes sub-pixel points d11, d21, d31, and d41, all located at the first row and first column of their pixel points; the set J2 includes sub-pixel points d12, d22, d32, and d42, all located at the first row and second column; the set J3 includes sub-pixel points d13, d23, d33, and d43, all located at the second row and first column; and the set J4 includes sub-pixel points d14, d24, d34, and d44, all located at the second row and second column.
  • In operation A2, the imaging device obtains the brightness value corresponding to each sub-pixel point set according to the brightness values of the sub-pixel points in the set.
  • the imaging device may determine a color coefficient corresponding to each sub-pixel point in the sub-pixel point set, where the color coefficient is determined according to the color channel corresponding to the sub-pixel point.
  • For example, the sub-pixel point d11 belongs to pixel point D1.
  • If the filter included in pixel point D1 is a green filter, that is, the color channel of pixel point D1 is green, then the color channel of the included sub-pixel point d11 is also green, and the imaging device can determine the color coefficient corresponding to sub-pixel point d11 according to the color channel (green) of sub-pixel point d11.
  • After determining the color coefficient corresponding to each sub-pixel point in the sub-pixel point set, the imaging device can multiply the brightness value of each sub-pixel point in the set by its corresponding color coefficient to obtain the weighted brightness value of each sub-pixel point in the set.
  • the imaging device may multiply the brightness value of the sub-pixel point d11 by the color coefficient corresponding to the sub-pixel point d11 to obtain the weighted brightness value of the sub-pixel point d11.
  • the imaging device may add the weighted brightness value of each sub-pixel point in the sub-pixel point set to obtain the brightness value corresponding to the sub-pixel point set.
  • the brightness value corresponding to the sub-pixel point set J1 can be calculated based on the following first formula.
  • Y_TL = Y_21*C_R + (Y_11 + Y_41)*C_G/2 + Y_31*C_B.
  • Here, Y_TL is the brightness value corresponding to the sub-pixel point set J1; Y_21, Y_11, Y_41, and Y_31 are the brightness values of the sub-pixel points d21, d11, d41, and d31 respectively; C_R is the color coefficient corresponding to sub-pixel point d21, C_G/2 is the color coefficient corresponding to sub-pixel points d11 and d41, and C_B is the color coefficient corresponding to sub-pixel point d31; Y_21*C_R, Y_11*C_G/2, Y_41*C_G/2, and Y_31*C_B are the weighted brightness values of the sub-pixel points d21, d11, d41, and d31 respectively.
  • the brightness value corresponding to the sub-pixel point set J2 can be calculated based on the following second formula.
  • Y_TR = Y_22*C_R + (Y_12 + Y_42)*C_G/2 + Y_32*C_B.
  • Here, Y_TR is the brightness value corresponding to the sub-pixel point set J2; Y_22, Y_12, Y_42, and Y_32 are the brightness values of the sub-pixel points d22, d12, d42, and d32 respectively; C_R is the color coefficient corresponding to sub-pixel point d22, C_G/2 is the color coefficient corresponding to sub-pixel points d12 and d42, and C_B is the color coefficient corresponding to sub-pixel point d32; Y_22*C_R, Y_12*C_G/2, Y_42*C_G/2, and Y_32*C_B are the weighted brightness values of the sub-pixel points d22, d12, d42, and d32 respectively.
  • the brightness value corresponding to the sub-pixel point set J3 can be calculated based on the following third formula.
  • Y_BL = Y_23*C_R + (Y_13 + Y_43)*C_G/2 + Y_33*C_B.
  • Here, Y_BL is the brightness value corresponding to the sub-pixel point set J3; Y_23, Y_13, Y_43, and Y_33 are the brightness values of the sub-pixel points d23, d13, d43, and d33 respectively; C_R is the color coefficient corresponding to sub-pixel point d23, C_G/2 is the color coefficient corresponding to sub-pixel points d13 and d43, and C_B is the color coefficient corresponding to sub-pixel point d33; Y_23*C_R, Y_13*C_G/2, Y_43*C_G/2, and Y_33*C_B are the weighted brightness values of the sub-pixel points d23, d13, d43, and d33 respectively.
  • the brightness value corresponding to the sub-pixel point set J4 can be calculated based on the following fourth formula.
  • Y_BR = Y_24*C_R + (Y_14 + Y_44)*C_G/2 + Y_34*C_B.
  • Here, Y_BR is the brightness value corresponding to the sub-pixel point set J4; Y_24, Y_14, Y_44, and Y_34 are the brightness values of the sub-pixel points d24, d14, d44, and d34 respectively; C_R is the color coefficient corresponding to sub-pixel point d24, C_G/2 is the color coefficient corresponding to sub-pixel points d14 and d44, and C_B is the color coefficient corresponding to sub-pixel point d34; Y_24*C_R, Y_14*C_G/2, Y_44*C_G/2, and Y_34*C_B are the weighted brightness values of the sub-pixel points d24, d14, d44, and d34 respectively.
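  • Operations A1 to A3 can be sketched as follows for one pixel point group, assuming the FIG. 6 layout with pixel points numbered D1 (green), D2 (red), D3 (blue), and D4 (green); the color coefficient values are illustrative assumptions:

```python
import numpy as np

C_R, C_G, C_B = 0.299, 0.587, 0.114   # illustrative color coefficients

def sub_brightness_map(D1, D2, D3, D4):
    """D1..D4: 2*2 arrays of sub-pixel brightness values for one pixel point
    group laid out as  D1(G) D2(R)
                       D3(B) D4(G).
    Returns the 2*2 sub-brightness map (Y_TL, Y_TR; Y_BL, Y_BR)."""
    out = np.empty((2, 2))
    for r in (0, 1):
        for c in (0, 1):
            # Sub-pixel point set: the sub-pixel at (r, c) of every pixel
            # point, each weighted by its pixel point's color coefficient
            # (the two green pixel points share C_G).
            out[r, c] = (D2[r, c] * C_R
                         + (D1[r, c] + D4[r, c]) * C_G / 2
                         + D3[r, c] * C_B)
    return out

# Uniform brightness 100 in every sub-pixel yields a flat sub-brightness map.
flat = np.full((2, 2), 100.0)
print(sub_brightness_map(flat, flat, flat, flat))  # all entries 100.0
```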
  • In operation A3, the imaging device generates a sub-brightness map according to the brightness value corresponding to each sub-pixel point set.
  • The sub-brightness map includes a plurality of pixels, each pixel in the sub-brightness map corresponds to one sub-pixel point set, and the pixel value of each pixel is the brightness value corresponding to the corresponding sub-pixel point set.
  • FIG. 10 is a schematic diagram of a sub-brightness map in an embodiment.
  • The sub-brightness map includes 4 pixels: the pixel in the first row and first column corresponds to the sub-pixel point set J1 and its pixel value is Y_TL; the pixel in the first row and second column corresponds to the set J2 and its pixel value is Y_TR; the pixel in the second row and first column corresponds to the set J3 and its pixel value is Y_BL; and the pixel in the second row and second column corresponds to the set J4 and its pixel value is Y_BR.
  • FIG. 11 is a flowchart of obtaining a target brightness map in an embodiment. As shown in FIG. 11, this method of obtaining the target brightness map may include the following operations:
  • a target pixel is determined from each pixel group to obtain multiple target pixels.
  • the pixel point group may include a plurality of pixel points arranged in an array, and the imaging device may determine a target pixel point from the plurality of pixel points included in each pixel point group, so as to obtain a plurality of target pixel points.
  • In one embodiment, the imaging device may determine the pixel point with a green color channel (that is, the pixel point provided with a green filter) from each pixel point group, and determine that green-channel pixel point as the target pixel point.
  • Since pixel points with a green color channel have better light sensitivity, determining the green-channel pixel point of each group as the target pixel point gives a higher-quality target brightness map in the subsequent operations.
  • a sub-brightness map corresponding to each pixel point group is generated according to the brightness value of the sub-pixel points included in each target pixel point.
  • The sub-brightness map corresponding to each pixel point group includes a plurality of pixels, and each pixel in it corresponds to one sub-pixel point included in the target pixel point of that pixel point group.
  • The pixel value of each pixel in the sub-brightness map corresponding to each pixel point group is the brightness value of the corresponding sub-pixel point.
  • FIG. 12 is a schematic diagram of generating the sub-brightness map L corresponding to the pixel point group Z1 according to the brightness values of the sub-pixel points included in the target pixel point DM of the group, in an embodiment.
  • The sub-brightness map L includes 4 pixels, where each pixel corresponds to one sub-pixel point included in the target pixel point DM, and the pixel value of each pixel is the brightness value of the corresponding sub-pixel point: the pixel in the first row and first column of the sub-brightness map L corresponds to the sub-pixel point in the first row and first column of DM, and its pixel value Gr_TL is that sub-pixel point's brightness value; the pixel in the first row and second column corresponds to the sub-pixel point in the first row and second column of DM, and its pixel value Gr_TR is that sub-pixel point's brightness value; the pixel in the second row and first column corresponds to the sub-pixel point in the second row and first column of DM, and its pixel value Gr_BL is that sub-pixel point's brightness value; and the pixel in the second row and second column corresponds to the sub-pixel point in the second row and second column of DM, and its pixel value Gr_BR is that sub-pixel point's brightness value.
  • a target brightness map is generated according to the sub-brightness map corresponding to each pixel point group.
  • the imaging device can splice the sub-luminance maps corresponding to each pixel point group according to the array arrangement of each pixel point group in the image sensor to obtain the target luminance map.
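  • A minimal sketch of the FIG. 11 method under stated assumptions: each pixel point group contributes the 2*2 sub-pixel brightness block of its chosen green target pixel point, supplied here as a 4-D array:

```python
import numpy as np

def target_brightness_map(green_subpixels):
    """green_subpixels: array of shape (rows, cols, 2, 2), one 2*2 sub-pixel
    brightness block per pixel point group (its green target pixel point).
    Splices the per-group sub-brightness maps by array position."""
    rows, cols = green_subpixels.shape[:2]
    out = np.empty((rows * 2, cols * 2))
    for i in range(rows):
        for j in range(cols):
            out[2*i:2*i+2, 2*j:2*j+2] = green_subpixels[i, j]
    return out

blocks = np.arange(16, dtype=float).reshape(2, 2, 2, 2)  # 2x2 grid of groups
print(target_brightness_map(blocks))                     # a 4x4 brightness map
```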
  • FIG. 13 is a flowchart of obtaining a target brightness map in another embodiment. As shown in FIG. 13, this method of obtaining the target brightness map may include the following operations:
  • In operation 1302, a pixel point at the same position is determined from each pixel point group to obtain a plurality of pixel point sets.
  • the positions of the pixels included in each pixel point set in the pixel point group are all the same.
  • Referring to FIG. 14, the imaging device determines the pixel points at the same position from pixel point group Z1, pixel point group Z2, pixel point group Z3, and pixel point group Z4, and four pixel point sets P1, P2, P3, and P4 can be obtained. The pixel point set P1 includes pixel points D11, D21, D31, and D41, all located at the first row and first column of their pixel point groups; the set P2 includes pixel points D12, D22, D32, and D42, all located at the first row and second column; the set P3 includes pixel points D13, D23, D33, and D43, all located at the second row and first column; and the set P4 includes pixel points D14, D24, D34, and D44, all located at the second row and second column.
  • In operation 1304, the imaging device generates a plurality of target brightness maps corresponding one-to-one to the plurality of pixel point sets, according to the brightness values of the pixel points in the sets.
  • As above, the brightness value of a pixel point of the image sensor can be characterized by the brightness values of the sub-pixel points it includes; therefore, for each pixel point set, the imaging device can generate the target brightness map corresponding to that set according to the brightness values of the sub-pixel points included in each pixel point of the set.
  • the target brightness map corresponding to a certain pixel point set includes a plurality of pixels, each pixel in the target brightness map corresponds to a sub-pixel point of the pixel points included in the pixel point set, and the target brightness map The pixel value of each pixel is the brightness value of the corresponding sub-pixel.
  • In the method of obtaining the target brightness map in FIG. 11, the imaging device determines one pixel point (the target pixel point) from each pixel point group and generates a single target brightness map according to the determined pixel points; in other words, that method generates one target brightness map from one pixel point of each pixel point group.
  • In the method of obtaining the target brightness map in FIG. 13, by contrast, the imaging device generates one target brightness map according to one pixel point of each pixel point group, generates another target brightness map according to another pixel point of each group, and so on.
  • In this way, the number of target brightness maps acquired by the imaging device is the same as the number of pixel points included in a pixel point group.
  • the imaging device After obtaining multiple target brightness maps, for each target brightness map, the imaging device performs segmentation processing on the target brightness map, and obtains the first segmented brightness map and the second segmented brightness map according to the segmentation processing results.
  • For each target brightness map, the imaging device can obtain an intermediate phase difference map according to the phase differences of the matching pixels in the first segmented brightness map and the second segmented brightness map corresponding to that target brightness map.
  • The imaging device can then obtain the target phase difference map according to the intermediate phase difference maps corresponding to the target brightness maps.
  • In this way, the accuracy of the obtained target phase difference map is relatively high.
  • For example, when each pixel point group includes 4 pixel points, the accuracy (resolution) of the target phase difference map obtained in this way is 4 times that of the target phase difference map obtained with the method of FIG. 11.
  • Next, the embodiments of the present application describe the technical process of obtaining the target phase difference map according to the intermediate phase difference maps corresponding to the target brightness maps; the process may include operations B1 to B3.
  • the imaging device determines pixels at the same position from each intermediate phase difference map to obtain a plurality of phase difference pixel sets.
  • the positions of the pixels included in each phase difference pixel set in the intermediate phase difference map are all the same.
  • Referring to FIG. 15, the imaging device determines the pixels at the same position from intermediate phase difference map 1, intermediate phase difference map 2, intermediate phase difference map 3, and intermediate phase difference map 4, and can obtain 4 phase difference pixel sets Y1, Y2, Y3, and Y4. The phase difference pixel set Y1 includes the pixel PD_Gr_1 in intermediate phase difference map 1, the pixel PD_R_1 in map 2, the pixel PD_B_1 in map 3, and the pixel PD_Gb_1 in map 4; the set Y2 includes PD_Gr_2, PD_R_2, PD_B_2, and PD_Gb_2 from maps 1 to 4 respectively; the set Y3 includes PD_Gr_3, PD_R_3, PD_B_3, and PD_Gb_3; and the set Y4 includes PD_Gr_4, PD_R_4, PD_B_4, and PD_Gb_4.
  • In operation B2, the imaging device stitches the pixels in each phase difference pixel set to obtain a sub-phase difference map corresponding to that set.
  • the sub-phase difference map includes a plurality of pixels, each pixel corresponds to a pixel in the phase difference pixel set, and the pixel value of each pixel is equal to the pixel value of the corresponding pixel.
  • In operation B3, the imaging device stitches the obtained multiple sub-phase difference maps to obtain the target phase difference map.
  • FIG. 16 is a schematic diagram of a target phase difference map, which includes sub-phase difference map 1, sub-phase difference map 2, sub-phase difference map 3, and sub-phase difference map 4, corresponding to the phase difference pixel sets Y1, Y2, Y3, and Y4 respectively.
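  • A hedged sketch of operations B1 to B3, assuming four intermediate phase difference maps of equal size and the 2*2 sub-map layout suggested by FIG. 16:

```python
import numpy as np

def target_phase_diff_map(pd_gr, pd_r, pd_b, pd_gb):
    """Stitch four intermediate phase difference maps into the target map.

    The pixels at the same position (i, j) of the four intermediate maps form
    one phase difference pixel set; each set is stitched into a 2*2 block
    (an assumed layout) and the blocks are spliced into the target map.
    """
    h, w = pd_gr.shape
    out = np.empty((2 * h, 2 * w))
    for i in range(h):
        for j in range(w):
            out[2*i:2*i+2, 2*j:2*j+2] = [[pd_gr[i, j], pd_r[i, j]],
                                         [pd_b[i, j],  pd_gb[i, j]]]
    return out

maps = [np.full((2, 2), v) for v in (1.0, 2.0, 3.0, 4.0)]
print(target_phase_diff_map(*maps))   # a 4*4 map of interleaved 2*2 blocks
```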
  • FIG. 17 is a flowchart of a method for performing segmentation processing on the target brightness map to obtain the first segmented brightness map and the second segmented brightness map in an embodiment; the method can be applied to the imaging device shown in FIG. 3. As shown in FIG. 17, the method may include the following operations:
  • Operation 1702: segmentation processing is performed on the target brightness map to obtain multiple brightness map regions.
  • Each brightness map region includes a row of pixels in the target brightness map, or each brightness map region includes a column of pixels in the target brightness map.
  • In one embodiment, the imaging device may segment the target brightness map column by column along the row direction to obtain multiple pixel columns of the target brightness map (that is, the brightness map regions described above).
  • In another embodiment, the imaging device may segment the target brightness map row by row along the column direction to obtain multiple pixel rows of the target brightness map (that is, the brightness map regions described above).
  • Operation 1704: a plurality of first brightness map regions and a plurality of second brightness map regions are obtained from the multiple brightness map regions.
  • The first brightness map region includes pixels in even-numbered rows of the target brightness map, or the first brightness map region includes pixels in even-numbered columns of the target brightness map.
  • The second brightness map region includes pixels in odd-numbered rows of the target brightness map, or the second brightness map region includes pixels in odd-numbered columns of the target brightness map.
  • When segmenting column by column, the imaging device may determine the even-numbered columns as first brightness map regions and the odd-numbered columns as second brightness map regions.
  • When segmenting row by row, the imaging device may determine the even-numbered rows as first brightness map regions and the odd-numbered rows as second brightness map regions.
  • Operation 1706: a plurality of first brightness map regions are used to form the first segmented brightness map, and a plurality of second brightness map regions are used to form the second segmented brightness map.
  • In one embodiment, the imaging device may determine the first column of pixels, the third column of pixels, and the fifth column of pixels of the target brightness map as second brightness map regions, and determine the second column of pixels, the fourth column of pixels, and the sixth column of pixels of the target brightness map as first brightness map regions. The imaging device may then splice the first brightness map regions to obtain the first segmented brightness map T1, which includes the second, fourth, and sixth columns of pixels of the target brightness map, and splice the second brightness map regions to obtain the second segmented brightness map T2, which includes the first, third, and fifth columns of pixels of the target brightness map.
  • Similarly, the imaging device may determine the first row of pixels, the third row of pixels, and the fifth row of pixels of the target brightness map as second brightness map regions, and determine the second row of pixels, the fourth row of pixels, and the sixth row of pixels of the target brightness map as first brightness map regions. The imaging device may then splice the first brightness map regions to obtain the first segmented brightness map T3, which includes the second, fourth, and sixth rows of pixels of the target brightness map, and splice the second brightness map regions to obtain the second segmented brightness map T4, which includes the first, third, and fifth rows of pixels of the target brightness map.
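  • As an illustration, a minimal NumPy sketch of this segmentation (the function name and the 0-based indexing are illustrative; the row/column numbering in the text above is 1-based):

    import numpy as np

    def split_brightness_map(target, axis=0):
        """Split a target brightness map into the first and second
        segmented brightness maps. axis=0 segments row by row along the
        column direction (even-numbered rows form the first map);
        axis=1 segments column by column along the row direction."""
        if axis == 0:
            first = target[1::2, :]   # 2nd, 4th, 6th, ... rows
            second = target[0::2, :]  # 1st, 3rd, 5th, ... rows
        else:
            first = target[:, 1::2]   # 2nd, 4th, 6th, ... columns
            second = target[:, 0::2]  # 1st, 3rd, 5th, ... columns
        return first, second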
  • In one embodiment, when the brightness map region includes a row of pixels in the target brightness map, determining the phase difference of mutually matched pixels may include the following operations:
  • A first adjacent pixel set is determined in each row of pixels included in the first segmented brightness map.
  • The pixels included in the first adjacent pixel set correspond to the same pixel point group in the image sensor.
  • When the brightness map region includes a row of pixels in the target brightness map, that is, when the imaging device segments the target brightness map row by row along the column direction, the two pixels in the first row of a sub-brightness map are located in the same brightness map region and therefore in the same segmented brightness map, and the two pixels in the second row of the sub-brightness map are located in another brightness map region and therefore in the other segmented brightness map. Assuming that the first row of the sub-brightness map lies in an even-numbered pixel row of the target brightness map, the two pixels in the first row of the sub-brightness map are located in the first segmented brightness map, and the two pixels in the second row of the sub-brightness map are located in the second segmented brightness map.
  • The imaging device can then determine the two pixels in the first row of the sub-brightness map as a first adjacent pixel set, because these two pixels correspond to the same pixel point group in the image sensor (the pixel point group shown in FIG. 8).
  • The imaging device searches the second segmented brightness map for a first matched pixel set corresponding to each first adjacent pixel set.
  • In one embodiment, the imaging device may obtain a plurality of pixels around the first adjacent pixel set in the first segmented brightness map, and the obtained surrounding pixels together with the first adjacent pixel set form a search pixel matrix.
  • For example, the search pixel matrix may include 9 pixels in 3 rows and 3 columns.
  • The imaging device may then search the second segmented brightness map for a pixel matrix similar to the search pixel matrix.
  • How to determine whether two pixel matrices are similar has been described above, and is not repeated here.
  • After finding a similar pixel matrix, the imaging device may extract the first matched pixel set from it.
  • The pixels in the first adjacent pixel set and the pixels in the first matched pixel set obtained through the search correspond to images formed in the image sensor by imaging light entering the lens from different directions.
  • The position difference between a first adjacent pixel set and a first matched pixel set refers to the difference between the position of the first adjacent pixel set in the first segmented brightness map and the position of the first matched pixel set in the second segmented brightness map.
  • the phase difference obtained through the upper and lower images can reflect the difference in the imaging position of the object in the vertical direction.
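  • As an illustration of the search described above, the following sketch matches a first adjacent pixel set against the second segmented brightness map using a 3 x 3 search pixel matrix; the sum-of-absolute-differences (SAD) similarity measure, the search range, and all names are assumptions, since the document defers the similarity criterion to its earlier description:

    import numpy as np

    def vertical_phase_difference(first_map, second_map, row, col, max_shift=8):
        """Return the vertical position difference between the 3 x 3
        search pixel matrix centred at (row, col) in the first segmented
        brightness map and the most similar 3 x 3 matrix found in the
        second segmented brightness map (interior positions only)."""
        ref = first_map[row - 1:row + 2, col - 1:col + 2].astype(np.int32)
        best_shift, best_cost = 0, np.inf
        for s in range(-max_shift, max_shift + 1):
            r = row + s
            if r - 1 < 0 or r + 2 > second_map.shape[0]:
                continue  # candidate matrix would fall outside the map
            cand = second_map[r - 1:r + 2, col - 1:col + 2].astype(np.int32)
            cost = np.abs(ref - cand).sum()   # SAD similarity measure
            if cost < best_cost:
                best_cost, best_shift = cost, s
        return best_shift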
  • In one embodiment, when the brightness map region includes a column of pixels in the target brightness map, determining the phase difference of mutually matched pixels may include the following operations:
  • Operation 2102: a second adjacent pixel set is determined in each column of pixels included in the first segmented brightness map, wherein the pixels included in the second adjacent pixel set correspond to the same pixel point group.
  • Operation 2104: for each second adjacent pixel set, search the second segmented brightness map for a second matched pixel set corresponding to the second adjacent pixel set.
  • Operation 2106: determine, according to the position difference between each second adjacent pixel set and each second matched pixel set, the phase difference between the mutually corresponding second adjacent pixel set and second matched pixel set, to obtain the phase difference value in the first direction.
  • In this case, the first segmented brightness map and the second segmented brightness map can be called the left image and the right image, respectively, and the phase difference obtained from the left and right images can reflect the difference in the imaging position of the object in the horizontal direction.
  • In other words, when the brightness map region includes a column of pixels in the target brightness map, the acquired phase difference can reflect the difference in the imaging position of the object in the horizontal direction; when the brightness map region includes a row of pixels in the target brightness map, the acquired phase difference can reflect the difference in the imaging position of the object in the vertical direction. Therefore, the phase difference obtained according to the embodiments of the present application can reflect both the difference in the imaging position of the object in the vertical direction and the difference in the imaging position of the object in the horizontal direction, so its accuracy is higher.
  • In one embodiment, the aforementioned focusing method may further include: generating a depth value according to the defocus distance value.
  • According to the defocus distance value, the image distance in the in-focus state can be calculated, and the object distance can be obtained from the image distance and the focal length; the object distance is the depth value.
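  • As a hedged illustration of this step (assuming the thin-lens model, which the document does not name explicitly): with focal length f and in-focus image distance v recovered from the defocus distance value, the object distance u follows from 1/f = 1/u + 1/v, that is, u = f*v / (v - f), and this object distance u is taken as the depth value.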
  • Fig. 22 is a structural block diagram of an apparatus for obtaining a phase difference according to an embodiment.
  • the phase difference acquisition device is applied to electronic equipment.
  • the electronic equipment includes an image sensor.
  • The image sensor includes a plurality of pixel point groups arranged in an array, and each pixel point group includes M*N pixel points arranged in an array; each pixel point corresponds to a photosensitive unit, where M and N are both natural numbers greater than or equal to 2.
  • the phase difference acquisition device includes a scene detection module 2210 and a phase difference acquisition module 2212.
  • The scene detection module 2210 is used to perform scene detection on the captured image to obtain the scene type.
  • The phase difference acquisition module 2212 is configured to obtain a phase difference value corresponding to the scene type through the image sensor, where the phase difference value is a phase difference value in a first direction or a phase difference value in a second direction, and the first direction forms a preset angle with the second direction.
  • the scene detection module 2210 is further configured to perform scene detection on the captured image through an artificial intelligence model to obtain the scene type, and the artificial intelligence model is obtained by training using sample images containing the scene type.
  • The scene detection module 2210 is further configured to detect, through an edge operator, the total number of edge points, the number of edge points in the first direction, and the number of edge points in the second direction in the scene of the captured image, and to determine the scene type of the captured image according to the ratio of the number of edge points in the first direction to the total number of edge points and the ratio of the number of edge points in the second direction to the total number of edge points.
  • The phase difference acquisition module 2212 is further configured to obtain the phase difference value in the second direction through the image sensor when the scene type is a horizontal texture scene, and to obtain the phase difference value in the first direction through the image sensor when the scene type is a vertical texture scene.
  • the phase difference acquisition module 2212 includes a brightness determination unit and a phase difference determination unit.
  • the brightness determining unit is configured to obtain a target brightness map according to the brightness value of the pixel points included in each pixel point group.
  • The phase difference determining unit is configured to perform segmentation processing on the target brightness map to obtain a first segmented brightness map and a second segmented brightness map, to determine the phase difference values of mutually matched pixels according to the position difference of the mutually matched pixels in the first segmented brightness map and the second segmented brightness map, and to determine the phase difference value in the first direction or the phase difference value in the second direction according to the phase difference values of the mutually matched pixels.
  • The brightness determining unit is further configured to, for each pixel point group, obtain a sub-brightness map corresponding to the pixel point group according to the brightness values of the sub-pixel points at the same position in each pixel point of the pixel point group, and to generate the target brightness map according to the sub-brightness maps corresponding to the pixel point groups.
  • The brightness determining unit is further configured to determine the sub-pixel points at the same position from each pixel point to obtain a plurality of sub-pixel point sets, wherein the sub-pixel points included in each sub-pixel point set are at the same position within their respective pixel points; for each sub-pixel point set, obtain the brightness value corresponding to the sub-pixel point set according to the brightness values of the sub-pixel points in the set; and generate the sub-brightness map according to the brightness values corresponding to the sub-pixel point sets.
  • The brightness determining unit is further configured to determine a color coefficient corresponding to each sub-pixel point in the sub-pixel point set, the color coefficient being determined according to the color channel of the sub-pixel point; multiply the brightness value of each sub-pixel point in the set by its color coefficient to obtain the weighted brightness of each sub-pixel point; and add the weighted brightnesses of the sub-pixel points in the set to obtain the brightness value corresponding to the sub-pixel point set.
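  • As an illustration of the weighted-brightness computation, a short sketch; the color coefficients are assumptions (the document only states that the coefficient is determined by the sub-pixel point's color channel), so standard luma weights are used here as an example:

    def sub_pixel_set_brightness(sub_pixels):
        """sub_pixels: list of (color_channel, brightness) tuples for one
        sub-pixel point set. Multiplies each brightness by the color
        coefficient of its channel and sums the weighted brightnesses."""
        color_coeff = {'R': 0.299, 'G': 0.587, 'B': 0.114}  # assumed values
        return sum(color_coeff[c] * v for c, v in sub_pixels)

    # Example: a set with two green, one red, and one blue sub-pixel point
    # sub_pixel_set_brightness([('G', 120), ('R', 80), ('B', 60), ('G', 118)])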
  • In one embodiment, each pixel point includes a plurality of sub-pixel points arranged in an array, and the brightness determining unit is further configured to determine a target pixel point from each pixel point group to obtain a plurality of target pixel points, to generate a sub-brightness map corresponding to each pixel point group according to the brightness values of the sub-pixel points included in each target pixel point, and to generate the target brightness map according to the sub-brightness maps corresponding to the pixel point groups.
  • The brightness determining unit is further configured to determine, from each pixel point group, a pixel point whose color channel is green, and to determine the pixel point whose color channel is green as the target pixel point.
  • The brightness determining unit is further configured to determine the pixel points at the same position from each pixel point group to obtain a plurality of pixel point sets, wherein the pixel points included in each pixel point set are at the same position within their respective pixel point groups, and to generate, according to the brightness values of the pixel points in the plurality of pixel point sets, a plurality of target brightness maps corresponding one-to-one to the plurality of pixel point sets.
  • The phase difference determining unit is further configured to generate, for each target brightness map, an intermediate phase difference map corresponding to the target brightness map according to the phase differences of the mutually matched pixels, and to generate the phase difference value in the first direction and the phase difference value in the second direction according to the intermediate phase difference maps.
  • In one embodiment, the phase difference determining unit is further configured to determine the pixels at the same position from each intermediate phase difference map to obtain a plurality of phase difference pixel sets, wherein the pixels included in each phase difference pixel set are at the same position in their respective intermediate phase difference maps; for each phase difference pixel set, splice the pixels in the phase difference pixel set to obtain a sub-phase difference map corresponding to the set; and splice the obtained multiple sub-phase difference maps to obtain a target phase difference map, the target phase difference map including the phase difference value in the first direction and the phase difference value in the second direction.
  • In one embodiment, the phase difference determining unit is further configured to perform segmentation processing on the target brightness map to obtain multiple brightness map regions, each brightness map region including a row of pixels in the target brightness map or a column of pixels in the target brightness map; obtain a plurality of first brightness map regions and a plurality of second brightness map regions from the multiple brightness map regions, wherein a first brightness map region includes pixels in even-numbered rows of the target brightness map or pixels in even-numbered columns of the target brightness map, and a second brightness map region includes pixels in odd-numbered rows of the target brightness map or pixels in odd-numbered columns of the target brightness map; and use the multiple first brightness map regions to form the first segmented brightness map and the multiple second brightness map regions to form the second segmented brightness map.
  • In one embodiment, the phase difference determining unit is further configured to, when the brightness map region includes a row of pixels in the target brightness map, determine a first adjacent pixel set in each row of pixels included in the first segmented brightness map, the pixels included in the first adjacent pixel set corresponding to the same pixel point group.
  • In one embodiment, the phase difference determining unit is further configured to, when the brightness map region includes a column of pixels in the target brightness map, determine a second adjacent pixel set in each column of pixels included in the first segmented brightness map, the pixels included in the second adjacent pixel set corresponding to the same pixel point group; search the second segmented brightness map for the second matched pixel set corresponding to each second adjacent pixel set; and determine, according to the position difference between each second adjacent pixel set and each second matched pixel set, the phase difference between the mutually corresponding sets to obtain the phase difference value in the first direction.
  • the apparatus for acquiring the phase difference further includes a processing module and a control module.
  • The processing module is configured to determine the defocus distance value and the moving direction according to the phase difference value.
  • The control module is configured to control the lens to move to focus according to the defocus distance value and the moving direction.
  • the apparatus for acquiring the phase difference further includes a correction module.
  • the correction module is used for correcting the calculated phase difference value by using a gain diagram.
  • The phase difference acquisition apparatus can be divided into different modules as needed to complete all or part of the functions of the above-mentioned phase difference acquisition apparatus.
  • FIG. 23 is a schematic diagram of the internal structure of an electronic device in an embodiment.
  • the electronic device includes a processor and a memory connected through a system bus.
  • the processor is used to provide computing and control capabilities to support the operation of the entire electronic device.
  • the memory may include a non-volatile storage medium and internal memory.
  • the non-volatile storage medium stores an operating system and a computer program.
  • the computer program can be executed by a processor to implement a focusing method provided in the following embodiments.
  • The internal memory provides a cached operating environment for the operating system and the computer program in the non-volatile storage medium.
  • The electronic device can be a mobile phone, a tablet computer, a personal digital assistant, or a wearable device.
  • each module in the focusing device provided in the embodiment of the present application may be in the form of a computer program.
  • the computer program can be run on a terminal or a server.
  • the program module composed of the computer program can be stored in the memory of the terminal or the server.
  • The embodiments of the present application also provide a computer-readable storage medium: one or more non-volatile computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the operations of the phase difference acquisition method.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM), which acts as external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Focusing (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present application relate to a phase difference obtaining method and apparatus, an electronic device, and a computer readable storage medium. The phase difference obtaining method comprises: performing scene detection on a captured image to obtain a scene type; and obtaining, by means of an image sensor, a phase difference value corresponding to the scene type, the phase difference value being a phase difference value in a first direction or a phase difference value in a second direction, and the first direction and the second direction forming a preset angle.

Description

Method and device for obtaining phase difference, and electronic equipment
Cross-reference to related applications

This application claims priority to the Chinese patent application filed with the Chinese Patent Office on November 12, 2019, with application number 2019111014227 and entitled "Phase difference acquisition method and apparatus, and electronic equipment", the entire contents of which are incorporated herein by reference.
Technical field

This application relates to the field of imaging, and in particular to a phase difference acquisition method and apparatus, an electronic device, and a computer-readable storage medium.
Background

With the development of electronic device technology, more and more users capture images with electronic devices. To ensure that the captured image is clear, it is usually necessary to focus the camera module of the electronic device, that is, to adjust the distance between the lens and the image sensor so that the subject is on the focal plane. Traditional focusing methods include phase detection auto focus (PDAF).

Traditional phase detection autofocus sets phase detection pixel points in pairs among the pixel points included in the image sensor, where one phase detection pixel point of each pair is shielded on the left side and the other is shielded on the right side, so that the imaging beam directed at each phase detection pixel point pair is separated into a left part and a right part; the phase difference can be obtained by comparing the images formed by the left and right parts of the imaging beam. After the phase difference is obtained, focusing can be performed according to it, where the phase difference refers to the difference in the imaging positions of imaging light incident from different directions.

However, with the above method of setting phase detection pixel points in the image sensor, the accuracy of phase difference acquisition is not high.
Summary

The embodiments of the present application provide a phase difference acquisition method and apparatus, an electronic device, and a computer-readable storage medium, which can improve the accuracy of phase difference acquisition.

A phase difference acquisition method is applied to an electronic device. The electronic device includes an image sensor, the image sensor includes a plurality of pixel point groups arranged in an array, and each pixel point group includes M*N pixel points arranged in an array; each pixel point corresponds to a photosensitive unit, where M and N are both natural numbers greater than or equal to 2. The method includes:

performing scene detection on the captured image to obtain a scene type; and

obtaining a phase difference value corresponding to the scene type through the image sensor, where the phase difference value is a phase difference value in a first direction or a phase difference value in a second direction, and the first direction forms a preset angle with the second direction.
A phase difference acquisition apparatus is applied to an electronic device. The electronic device includes an image sensor, the image sensor includes a plurality of pixel point groups arranged in an array, and each pixel point group includes M*N pixel points arranged in an array; each pixel point corresponds to a photosensitive unit, where M and N are both natural numbers greater than or equal to 2. The apparatus includes:

a scene detection module, configured to perform scene detection on the captured image to obtain a scene type; and

a phase difference acquisition module, configured to obtain a phase difference value corresponding to the scene type through the image sensor, where the phase difference value is a phase difference value in a first direction or a phase difference value in a second direction, and the first direction forms a preset angle with the second direction.

An electronic device includes a memory and a processor, the memory storing a computer program that, when executed by the processor, implements the operations of the method.

A computer-readable storage medium has a computer program stored thereon that, when executed by a processor, implements the operations of the method.

With the above phase difference acquisition method and apparatus, electronic device, and computer-readable storage medium, scene detection is performed on the image to obtain the scene type, and the corresponding phase difference is calculated according to the scene type. The phase difference value may be the phase difference value in the first direction or the phase difference value in the second direction, and the first direction and the second direction form a preset angle, so the phase difference of each pixel can be obtained accurately. Moreover, after scene detection, only the phase difference in the corresponding direction needs to be calculated instead of the phase differences in both directions, which greatly reduces computation time, improves calculation speed, and further improves focusing speed.
Description of the drawings

In order to more clearly describe the technical solutions in the embodiments of the present application or in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained based on these drawings without creative work.

FIG. 1 is a schematic diagram of the principle of phase detection autofocus in an embodiment;

FIG. 2 is a schematic diagram of phase detection pixel points arranged in pairs among the pixel points included in an image sensor;

FIG. 3 is a schematic diagram of part of the structure of an image sensor in an embodiment;

FIG. 4 is a schematic diagram of the structure of a pixel point in an embodiment;

FIG. 5 is a schematic structural diagram of an imaging device in an embodiment;

FIG. 6 is a schematic diagram of a filter arranged on a pixel point group in an embodiment;

FIG. 7A is a flowchart of a phase difference acquisition method in an embodiment;

FIG. 7B is a schematic diagram of a horizontal texture scene in an embodiment;

FIG. 7C is a schematic diagram of a circular scene in an embodiment;

FIG. 8 is a flowchart of calculating the phase difference corresponding to the scene type in an embodiment;

FIG. 9 is a schematic diagram of a pixel point group in an embodiment;

FIG. 10 is a schematic diagram of a sub-brightness map in an embodiment;

FIG. 11 is a flowchart of obtaining a target brightness map in an embodiment;

FIG. 12 is a schematic diagram of generating the sub-brightness map corresponding to a pixel point group according to the brightness values of the sub-pixel points included in the target pixel point of the pixel point group in an embodiment;

FIG. 13 is a flowchart of obtaining a target brightness map in another embodiment;

FIG. 14 is a schematic diagram of determining the pixel points at the same position from each pixel point group in an embodiment;

FIG. 15 is a schematic diagram of determining the pixels at the same position from each intermediate phase difference map in an embodiment;

FIG. 16 is a schematic diagram of a target phase difference map in an embodiment;

FIG. 17 is a flowchart of a method for segmenting the target brightness map to obtain a first segmented brightness map and a second segmented brightness map in an embodiment;

FIG. 18 is a schematic diagram of generating a first segmented brightness map and a second segmented brightness map from the target brightness map in an embodiment;

FIG. 19 is a schematic diagram of generating a first segmented brightness map and a second segmented brightness map from the target brightness map in another embodiment;

FIG. 20 is a flowchart of determining the phase difference of mutually matched pixels according to the position difference of the mutually matched pixels in the first segmented brightness map and the second segmented brightness map in an embodiment;

FIG. 21 is a flowchart of determining the phase difference of mutually matched pixels according to the position difference of the mutually matched pixels in the first segmented brightness map and the second segmented brightness map in another embodiment;

FIG. 22 is a structural block diagram of a phase difference acquisition apparatus in an embodiment;

FIG. 23 is a block diagram of a computer device provided by an embodiment of this application.
Detailed description

In order to make the purpose, technical solutions, and advantages of this application clearer, the application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the application and are not intended to limit it.
FIG. 1 is a schematic diagram of the principle of phase detection auto focus (PDAF). As shown in FIG. 1, M1 is the position of the image sensor when the imaging device is in the in-focus state, where the in-focus state refers to the state of successful focusing. When the image sensor is at position M1, the imaging light g reflected by the object W toward the lens Lens in different directions converges on the image sensor; that is, the imaging light g reflected by the object W toward the lens Lens in different directions is imaged at the same position on the image sensor, and the image formed by the image sensor is clear.

M2 and M3 are positions where the image sensor may be located when the imaging device is not in the in-focus state. As shown in FIG. 1, when the image sensor is at position M2 or M3, the imaging light g reflected by the object W toward the lens Lens in different directions is imaged at different positions. When the image sensor is at position M2, the imaging light g is imaged at position A and position B respectively; when the image sensor is at position M3, the imaging light g is imaged at position C and position D respectively. At this time, the image formed by the image sensor is not clear.

In PDAF technology, the difference in the positions of the images formed in the image sensor by imaging light entering the lens from different directions can be obtained; for example, as shown in FIG. 1, the difference between position A and position B, or the difference between position C and position D, can be obtained. After obtaining this difference, the defocus distance can be obtained according to the difference and the geometric relationship between the lens and the image sensor in the camera, where the defocus distance refers to the distance between the current position of the image sensor and the position where the image sensor should be in the in-focus state. The imaging device can focus according to the obtained defocus distance.

It follows that, at the in-focus position, the calculated PD value is 0; conversely, the larger the calculated value, the farther from the in-focus position, and the smaller the value, the closer to it. When PDAF is used for focusing, the PD value is calculated, the defocus distance is obtained from the calibrated correspondence between the PD value and the defocus distance, and the lens is then controlled to move to the in-focus position according to the defocus distance, thereby achieving focusing.

In the related art, some phase detection pixel points can be set in pairs among the pixel points included in the image sensor. As shown in FIG. 2, phase detection pixel point pairs (hereinafter referred to as pixel point pairs) A, B, and C can be set in the image sensor. In each pixel point pair, one phase detection pixel point is shielded on the left side (Left Shield), and the other is shielded on the right side (Right Shield).

For a phase detection pixel point shielded on the left side, only the right part of the imaging beam directed at it can be imaged on its photosensitive part (that is, the unshielded part); for a phase detection pixel point shielded on the right side, only the left part of the imaging beam directed at it can be imaged on its photosensitive part. In this way, the imaging beam can be divided into left and right parts, and the phase difference can be obtained by comparing the images formed by the two parts.

However, since the phase detection pixel points set in the image sensor are usually shielded on the left and right sides respectively, for scenes with horizontal texture, the PD value cannot be calculated from the phase detection pixel points. For example, if the shooting scene is a horizontal line, left and right images are obtained according to the PD characteristics, but the PD value cannot be calculated.
To address the situation where phase detection autofocus cannot calculate a PD value to achieve focusing for some horizontal texture scenes, an embodiment of the present application provides an imaging component that can be used to detect and output a phase difference value in a first direction and a phase difference value in a second direction; for a horizontal texture scene, the phase difference value in the second direction can be used to achieve focusing.

In one embodiment, the present application provides an imaging component. The imaging component includes an image sensor. The image sensor may be a complementary metal oxide semiconductor (CMOS) image sensor, a charge-coupled device (CCD), a quantum thin-film sensor, an organic sensor, or the like.

FIG. 3 is a schematic diagram of part of the structure of an image sensor in an embodiment. The image sensor includes a plurality of pixel point groups Z arranged in an array; each pixel point group Z includes a plurality of pixel points D arranged in an array, and each pixel point D corresponds to a photosensitive unit. The plurality of pixel points include M*N pixel points, where M and N are both natural numbers greater than or equal to 2. Each pixel point D includes a plurality of sub-pixel points d arranged in an array; that is, each photosensitive unit can be composed of a plurality of photosensitive elements arranged in an array, where a photosensitive element is an element that can convert an optical signal into an electrical signal. Referring to FIG. 3, the plurality of sub-pixel points d arranged in an array in each pixel point D are jointly covered by one microlens W. In one embodiment, the photosensitive element may be a photodiode. In this embodiment, each pixel point group Z includes 4 pixel points D arranged in a 2*2 array, and each pixel point D may include 4 sub-pixel points d arranged in a 2*2 array; the 4 sub-pixel points d jointly cover one microlens W. Each pixel point D includes 2*2 photodiodes, arranged corresponding to the 4 sub-pixel points d of the 2*2 array. Each photodiode receives an optical signal and performs photoelectric conversion, converting the optical signal into an electrical signal for output. The 4 sub-pixel points d included in each pixel point D are arranged corresponding to a filter of the same color, so each pixel point D corresponds to one color channel, such as the red channel R, the green channel G, or the blue channel B.

As shown in FIG. 4, taking each pixel point D including sub-pixel point 1, sub-pixel point 2, sub-pixel point 3, and sub-pixel point 4 as an example, the signals of sub-pixel point 1 and sub-pixel point 2 can be merged and output, and the signals of sub-pixel point 3 and sub-pixel point 4 can be merged and output, thereby constructing two PD pixel pairs along the second direction (the vertical direction); the PD value (phase difference value) of each sub-pixel point in pixel point D along the second direction can be determined from the phase values of the two PD pixel pairs. The signals of sub-pixel point 1 and sub-pixel point 3 can be merged and output, and the signals of sub-pixel point 2 and sub-pixel point 4 can be merged and output, thereby constructing two PD pixel pairs along the first direction (the horizontal direction); the PD value (phase difference value) of each sub-pixel point in pixel point D along the first direction can be determined from the phase values of the two PD pixel pairs.
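As an illustration of the signal merging just described, a minimal sketch (the function name and the tuple representation are illustrative; only the combination pattern follows FIG. 4):

    def pd_pixel_pairs(d):
        """d = ((s1, s2), (s3, s4)): the 2*2 sub-pixel signals of one
        pixel point D, as in FIG. 4. Merging the top and bottom rows
        gives the pair for the second (vertical) direction; merging the
        left and right columns gives the pair for the first (horizontal)
        direction."""
        (s1, s2), (s3, s4) = d
        second_direction_pair = (s1 + s2, s3 + s4)  # rows merged
        first_direction_pair = (s1 + s3, s2 + s4)   # columns merged
        return first_direction_pair, second_direction_pair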
FIG. 5 is a schematic structural diagram of an imaging device in an embodiment. As shown in FIG. 5, the imaging device includes a lens 50, a filter 52, and an imaging component 54. The lens 50, the filter 52, and the imaging component 54 are located in sequence on the incident light path; that is, the lens 50 is arranged on the filter 52, and the filter 52 is arranged on the imaging component 54.

The imaging component 54 includes the image sensor of FIG. 3. The image sensor includes a plurality of pixel point groups Z arranged in an array; each pixel point group Z includes a plurality of pixel points D arranged in an array, each pixel point D corresponds to a photosensitive unit, and each photosensitive unit can be composed of a plurality of photosensitive elements arranged in an array. In this embodiment, each pixel point D includes 4 sub-pixel points d arranged in a 2*2 array, and each sub-pixel point d corresponds to one photodiode 542; that is, the 2*2 photodiodes 542 are arranged corresponding to the 4 sub-pixel points d of the 2*2 array. The 4 sub-pixel points d share one lens.

The filter 52 may include three types, red, green, and blue, which transmit only light at the wavelengths corresponding to red, green, and blue, respectively. The 4 sub-pixel points d included in one pixel point D are arranged corresponding to a filter of the same color. In other embodiments, the filter may also be white, which facilitates the passage of light over a larger spectral (wavelength) range and increases the luminous flux passing through the white filter.

The lens 50 receives the incident light and transmits it to the filter 52. After the filter 52 filters the incident light, the filtered light is incident on the imaging component 54 on a pixel basis.

The photosensitive unit in the image sensor included in the imaging component 54 converts the light incident from the filter 52 into a charge signal through the photoelectric effect and generates a pixel signal consistent with the charge signal; the charge signal is consistent with the received light intensity.

As the above description shows, the pixel points included in an image sensor and the pixels included in an image are two different concepts: a pixel of an image refers to the smallest constituent unit of the image, which is generally represented by a sequence of numbers, usually called the pixel value of the pixel. The embodiments of the present application involve both concepts, and a brief explanation is given here for the reader's convenience.

FIG. 6 is a schematic diagram of a filter arranged on a pixel point group in an embodiment. The pixel point group Z includes 4 pixel points D arranged in an array of two rows and two columns, where the color channel of the pixel point in the first row and first column is green, that is, a green filter is arranged on it; the color channel of the pixel point in the first row and second column is red, that is, a red filter is arranged on it; the color channel of the pixel point in the second row and first column is blue, that is, a blue filter is arranged on it; and the color channel of the pixel point in the second row and second column is green, that is, a green filter is arranged on it.
FIG. 7A is a flowchart of a phase difference acquisition method in an embodiment. The phase difference acquisition method in this embodiment is described as running on the imaging device of FIG. 5. As shown in FIG. 7A, the focusing method includes operation 702 to operation 706.

Operation 702: scene detection is performed on the captured image to obtain the scene type.

Specifically, when an image is captured by the imaging device of an electronic device, the captured image contains scene information, and the scene information may differ for different subjects. For example, when shooting a cuboid box, the horizontal direction is a straight line, as shown in FIG. 7B; the phase difference value in the horizontal direction cannot be calculated, and the phase difference value in the vertical direction needs to be calculated to facilitate subsequent focusing. If a basketball is shot, the horizontal direction is not a straight line, as shown in FIG. 7C; the phase difference value in the horizontal direction can be calculated, there is no need to calculate the phase difference value in the vertical direction to facilitate subsequent focusing, and the time consumed by that calculation is saved.

It is understandable that the scene of the captured image can be detected through an artificial intelligence model or an edge operator to obtain the scene type.

The scene types may include horizontal texture scenes, vertical texture scenes, circular texture scenes, and so on.

Operation 704: a phase difference value corresponding to the scene type is obtained through the image sensor, the phase difference value being a phase difference value in a first direction or a phase difference value in a second direction, where the first direction forms a preset angle with the second direction.

Specifically, the phase difference value corresponding to the scene type refers to the phase difference value that can be used for focusing for that scene type. For example, for a horizontal texture scene, the phase difference value in the horizontal direction cannot be calculated; for accurate focusing, data needs to be collected through the above image sensor including M*N pixel points arranged in an array, and the phase difference value in the vertical direction is then calculated. The first direction and the second direction may form a preset angle, which may be any angle other than 0 degrees, 180 degrees, and 360 degrees. In this embodiment, the first direction may be the horizontal direction, and the second direction may be the vertical direction.

With the above phase difference acquisition method, scene detection is performed on the image to obtain the scene type, and the corresponding phase difference is calculated according to the scene type. The phase difference value may be the phase difference value in the first direction or the phase difference value in the second direction, and the first direction and the second direction form a preset angle, so the phase difference of each pixel can be obtained accurately. Moreover, after scene detection, only the phase difference in the corresponding direction needs to be calculated instead of the phase differences in both directions, which greatly reduces computation time, improves calculation speed, and further improves focusing speed.
In one embodiment, performing scene detection on the captured image to obtain the scene type includes: performing scene detection on the captured image through an artificial intelligence model to obtain the scene type, where the artificial intelligence model is trained with sample images containing scene types.

Specifically, sample images containing scene types can be collected in advance, and an artificial intelligence model can then be trained to obtain a model that can detect different scene types. The trained artificial intelligence model is stored in the electronic device, and scene detection is performed on the captured image during shooting to obtain the scene type. The artificial intelligence model can detect the scene type efficiently and accurately.

In one embodiment, performing scene detection on the captured image to obtain the scene type includes: detecting the total number of edge points, the number of edge points in the first direction, and the number of edge points in the second direction in the scene of the captured image through an edge operator; and determining the scene type of the captured image according to the ratio of the number of edge points in the first direction to the total number of edge points and the ratio of the number of edge points in the second direction to the total number of edge points.
Specifically, the edge operator can be configured according to actual conditions. Edge operators include the discrete gradient operator, the Roberts operator, the Laplacian operator, the gradient operator, and the Sobel operator. The Sobel edge operator for the horizontal direction is

    -1   0  +1
    -2   0  +2
    -1   0  +1

and the edge operator for the vertical direction is

    -1  -2  -1
     0   0   0
    +1  +2  +1
The total number of edge points, the number of edge points in the first direction, and the number of edge points in the second direction in the scene of the captured image can be counted. When the ratio of the number of edge points in the first direction to the total number of edge points exceeds a threshold, the scene is a horizontal texture scene; when the ratio of the number of edge points in the second direction to the total number of edge points exceeds a threshold, the scene is a vertical texture scene. When both ratios exceed the threshold, the scene contains both horizontal and vertical textures; for the horizontal texture, the PD value in the vertical direction is calculated, and for the vertical texture, the PD value in the horizontal direction is calculated.

Scene detection through an edge operator can quickly detect the scene type.
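As an illustration of this edge-operator scene detection, the following sketch counts directional edge points with the Sobel kernels above; the edge-point criterion, the threshold value, and all names are assumptions, since the document does not fix them:

    import numpy as np
    from scipy import ndimage

    def detect_scene_type(gray, ratio_threshold=0.5):
        """Classify a grayscale captured image as a horizontal texture
        scene, a vertical texture scene, or mixed, from the ratios of
        directional edge points to the total number of edge points."""
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # horizontal operator
        ky = kx.T                                            # vertical operator
        gx = ndimage.convolve(gray.astype(np.float32), kx)
        gy = ndimage.convolve(gray.astype(np.float32), ky)
        mag = np.hypot(gx, gy)
        edges = mag > mag.mean() + 2 * mag.std()             # assumed edge criterion
        total = edges.sum()
        if total == 0:
            return 'no_texture'
        # An edge point on a horizontal texture has a mainly vertical
        # gradient, and vice versa.
        horizontal_ratio = (edges & (np.abs(gy) > np.abs(gx))).sum() / total
        vertical_ratio = (edges & (np.abs(gx) >= np.abs(gy))).sum() / total
        if horizontal_ratio > ratio_threshold:
            return 'horizontal_texture'
        if vertical_ratio > ratio_threshold:
            return 'vertical_texture'
        return 'mixed'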
In one embodiment, calculating the phase difference value corresponding to the scene type includes: when the scene type is a horizontal texture scene, obtaining the phase difference value in the second direction through the image sensor; and when the scene type is a vertical texture scene, obtaining the phase difference value in the first direction through the image sensor.

In one embodiment, the above method further includes: determining a defocus distance value according to the phase difference value; and controlling the lens to move to focus according to the defocus distance value.

The correspondence between the phase difference value and the defocus distance value can be obtained through calibration.
The correspondence between the defocus distance value and the phase difference value is as follows:

defocus = PD * slope(DCC)

where DCC (Defocus Conversion Coefficient) is obtained by calibration, defocus is the defocus distance value, slope is the slope function, and PD is the phase difference value.
The calibration process for the correspondence between the phase difference value and the defocus distance value includes: dividing the effective focus stroke of the camera module into 10 equal parts, that is, (near-focus DAC - far-focus DAC)/10, so as to cover the focus range of the motor; focusing at each focus DAC position (the DAC code may range from 0 to 1023) and recording the phase difference at the current focus DAC position; after the motor has completed its focus stroke, taking the group of 10 focus DAC values and the obtained PD values and computing their ratios, which yields 10 similar ratios K; and fitting the two-dimensional data composed of DAC and PD to obtain a straight line with slope K.
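The sketch below illustrates this calibration fit, assuming NumPy; the DAC codes and PD readings are synthetic placeholders for values that would be measured on a real module.

```python
import numpy as np

# Ten equally spaced focus DAC positions over the effective focus stroke,
# and the PD value recorded at each one (synthetic numbers for the sketch).
focus_dacs = np.linspace(200, 900, 10)
recorded_pds = (focus_dacs - 550.0) / 70.0

# Fit the two-dimensional (PD, DAC) data to a straight line; the slope K
# plays the role of slope(DCC) in defocus = PD * slope(DCC).
K, offset = np.polyfit(recorded_pds, focus_dacs, deg=1)

def pd_to_defocus(pd):
    """Convert a phase difference into a defocus distance (in DAC codes);
    the sign of the result gives the direction in which to move the lens."""
    return K * pd

print(pd_to_defocus(1.5), pd_to_defocus(-0.3))
```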
The movement direction can be determined according to the sign of the defocus distance value.

During focusing, scene detection makes it possible to quickly obtain the phase difference value in the first direction or in the second direction, determine the defocus distance value from that phase difference value, and control the lens movement according to the defocus distance value, thereby realizing phase detection autofocus. Because either the first-direction or the second-direction phase difference value can be output depending on the scene, focusing with the phase difference value works effectively for both horizontal texture scenes and vertical texture scenes, which improves the accuracy and stability of focusing. Moreover, only the phase difference value in one direction needs to be calculated, which saves the time of calculating phase difference values and increases the focusing speed, making the method suitable for focusing on moving objects.

In one embodiment, the above method further includes: correcting the calculated phase difference value by using a gain map.
Specifically, the gain map may be obtained by pre-calibration. The gain map contains a gain coefficient for the PD value of each pixel; the calculated PD value is multiplied by the corresponding gain coefficient to obtain the corrected PD value. Correcting the PD value in this way makes the calculated PD value more accurate.
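A minimal sketch of this correction, assuming the pre-calibrated gain map has already been laid out with the same shape as the computed PD map:

```python
import numpy as np

def correct_pd_map(pd_map, gain_map):
    """Multiply each computed PD value by the gain coefficient calibrated
    for the corresponding pixel; both maps share the same spatial layout."""
    pd_map = np.asarray(pd_map, dtype=np.float64)
    gain_map = np.asarray(gain_map, dtype=np.float64)
    if pd_map.shape != gain_map.shape:
        raise ValueError("gain map and PD map must have the same shape")
    return pd_map * gain_map
```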
In general, either a frequency-domain algorithm or a spatial-domain algorithm may be used to obtain the phase difference value. The frequency-domain algorithm exploits the Fourier shift property: the collected target brightness map is converted from the spatial domain to the frequency domain by a Fourier transform, and the phase correlation is then computed. The point where the correlation reaches its maximum (the peak) indicates the maximum displacement, and an inverse Fourier transform then gives the maximum displacement in the spatial domain. The spatial-domain algorithm finds feature points, such as edge features, DoG (difference of Gaussians) features, or Harris corner points, and uses these feature points to calculate the displacement.
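A sketch of the frequency-domain route described above, assuming NumPy; it recovers the dominant displacement between two equally sized brightness maps by locating the peak of the phase correlation.

```python
import numpy as np

def phase_correlation_shift(map_a, map_b):
    """Estimate the (dy, dx) displacement between two brightness maps by
    Fourier-transforming both, normalizing the cross-power spectrum to keep
    only phase, and locating the peak of its inverse transform."""
    fa = np.fft.fft2(map_a)
    fb = np.fft.fft2(map_b)
    cross = fa * np.conj(fb)
    cross /= np.maximum(np.abs(cross), 1e-12)   # phase-only spectrum
    corr = np.fft.ifft2(cross).real             # correlation surface
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past the midpoint wrap around to negative displacements.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```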
FIG. 8 is a flowchart of obtaining the phase difference in an embodiment. As shown in FIG. 8, obtaining the phase difference includes the following operations.

Operation 802: obtain a target brightness map according to the brightness values of the pixel points included in each of the pixel point groups.

Generally, the brightness value of a pixel point of the image sensor can be characterized by the brightness values of the sub-pixel points it includes. The imaging device may obtain the target brightness map according to the brightness values of the sub-pixel points within the pixel points included in each pixel point group. The brightness value of a sub-pixel point refers to the brightness of the light signal received by the photosensitive element corresponding to that sub-pixel point.

As described above, a sub-pixel point included in the image sensor is a photosensitive element capable of converting a light signal into an electrical signal, so the intensity of the light signal received by a sub-pixel point can be obtained from the electrical signal it outputs, and the brightness value of the sub-pixel point can in turn be obtained from that intensity.

The target brightness map in the embodiments of the present application reflects the brightness values of the sub-pixel points in the image sensor. The target brightness map may include multiple pixels, where the pixel value of each pixel in the target brightness map is obtained from the brightness values of sub-pixel points in the image sensor.
Operation 804: perform segmentation processing on the target brightness map to obtain a first segmented brightness map and a second segmented brightness map, and determine the phase difference values of mutually matched pixels according to the position differences of the mutually matched pixels in the first segmented brightness map and the second segmented brightness map.

In one embodiment, the imaging device may segment the target brightness map along the column direction (the y-axis direction in the image coordinate system); in this process, each dividing line of the segmentation is perpendicular to the column direction.

In another embodiment, the imaging device may segment the target brightness map along the row direction (the x-axis direction in the image coordinate system); in this process, each dividing line of the segmentation is perpendicular to the row direction.

The first and second segmented brightness maps obtained by segmenting the target brightness map along the column direction may be called the upper image and the lower image, respectively; those obtained by segmenting along the row direction may be called the left image and the right image, respectively.
Here, "mutually matched pixels" means that the pixel matrices formed by each pixel and its surrounding pixels are similar to each other. For example, pixel a and its surrounding pixels in the first segmented brightness map form a pixel matrix of 3 rows and 3 columns with the pixel values:

    2   15   70
    1   35   60
    0  100    1
Pixel b and its surrounding pixels in the second segmented brightness map also form a pixel matrix of 3 rows and 3 columns with the pixel values:

    1   15   70
    1   36   60
    0  100    2
As can be seen from the above, these two matrices are similar, so pixel a and pixel b can be considered to match each other. There are many ways to judge whether pixel matrices are similar. A common way is to take the difference between the pixel values of each pair of corresponding pixels in the two matrices, sum the absolute values of these differences, and use this sum to decide: if the sum is less than a preset threshold, the pixel matrices are considered similar; otherwise, they are considered dissimilar.

For example, for the two 3-by-3 pixel matrices above, the differences of 1 and 2, of 15 and 15, of 70 and 70, and so on are taken, the absolute values of the differences are summed, and the result of the addition is 3. Since this sum of 3 is less than the preset threshold, the two 3-by-3 pixel matrices are considered similar.

Another way to judge whether pixel matrices are similar is to extract their edge features, for example by a Sobel convolution kernel or a Laplacian-based computation, and judge similarity from the edge features.
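A sketch of the sum-of-absolute-differences test just described, reusing the two 3-by-3 example matrices; the threshold value is a hypothetical choice.

```python
import numpy as np

def matrices_similar(block_a, block_b, threshold=10.0):
    """Sum the absolute differences of corresponding pixel values; the two
    pixel matrices are considered similar when the sum stays below the
    preset threshold."""
    diff = np.abs(np.asarray(block_a, float) - np.asarray(block_b, float))
    return float(diff.sum()) < threshold

a = [[2, 15, 70], [1, 35, 60], [0, 100, 1]]
b = [[1, 15, 70], [1, 36, 60], [0, 100, 2]]
print(matrices_similar(a, b))   # SAD = 3 < 10, so pixels a and b match
```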
In the embodiments of the present application, "the position difference of mutually matched pixels" refers to the difference between the position of the matched pixel located in the first segmented brightness map and the position of the matched pixel located in the second segmented brightness map. In the example above, the position difference of the mutually matched pixels a and b is the difference between the position of pixel a in the first segmented brightness map and the position of pixel b in the second segmented brightness map.

Mutually matched pixels correspond to the different images formed in the image sensor by imaging light entering the lens from different directions. For example, pixel a in the first segmented brightness map and pixel b in the second segmented brightness map match each other, where pixel a may correspond to the image formed at position A in FIG. 1 and pixel b may correspond to the image formed at position B in FIG. 1.

Since mutually matched pixels correspond to different images formed in the image sensor by imaging light entering the lens from different directions, the phase difference of the mutually matched pixels can be determined from their position difference.

Operation 806: determine the phase difference value in the first direction or the phase difference value in the second direction according to the phase difference values of the mutually matched pixels.

When the first segmented brightness map includes the even-numbered rows of pixels, the second segmented brightness map includes the odd-numbered rows of pixels, and pixel a in the first segmented brightness map matches pixel b in the second segmented brightness map, the phase difference value in the first direction can be determined from the phase difference of the mutually matched pixels a and b.

When the first segmented brightness map includes the even-numbered columns of pixels, the second segmented brightness map includes the odd-numbered columns of pixels, and pixel a in the first segmented brightness map matches pixel b in the second segmented brightness map, the phase difference value in the second direction can be determined from the phase difference of the mutually matched pixels a and b.

The target brightness map is obtained from the brightness values of the pixel points in the pixel point groups. After the target brightness map is divided into two segmented brightness maps, pixel matching can quickly determine the phase difference values of the mutually matched pixels, and the result contains rich phase difference values, which improves the accuracy of the phase difference values and the accuracy and stability of focusing.
In one embodiment, each of the pixel points includes a plurality of sub-pixel points arranged in an array, and obtaining the target brightness map according to the brightness values of the pixel points included in each pixel point group includes: for each pixel point group, obtaining the sub-brightness map corresponding to the pixel point group according to the brightness values of the sub-pixel points at the same position in each pixel point of the group; and generating the target brightness map according to the sub-brightness map corresponding to each pixel point group.

Here, the sub-pixel points at the same position in each pixel point refer to sub-pixel points whose arrangement positions within their respective pixel points are the same.

FIG. 9 is a schematic diagram of a pixel point group in an embodiment. As shown in FIG. 9, the pixel point group includes 4 pixel points arranged in an array of two rows and two columns, namely the D1, D2, D3, and D4 pixel points. Each pixel point includes 4 sub-pixel points arranged in an array of two rows and two columns; the sub-pixel points are d11, d12, d13, d14, d21, d22, d23, d24, d31, d32, d33, d34, d41, d42, d43, and d44.

As shown in FIG. 9, the sub-pixel points d11, d21, d31, and d41 occupy the same arrangement position, the first row and first column, within their respective pixel points; d12, d22, d32, and d42 occupy the first row and second column; d13, d23, d33, and d43 occupy the second row and first column; and d14, d24, d34, and d44 occupy the second row and second column.

In one embodiment, obtaining the sub-brightness map corresponding to the pixel point group according to the brightness values of the sub-pixel points at the same position in each pixel point of the group may include operations A1 to A3.

Operation A1: the imaging device determines the sub-pixel points at the same position from each pixel point to obtain a plurality of sub-pixel point sets.

The sub-pixel points included in each sub-pixel point set occupy the same position within their pixel points.

The imaging device determines the sub-pixel points at the same position from the D1, D2, D3, and D4 pixel points, obtaining 4 sub-pixel point sets J1, J2, J3, and J4. The set J1 includes the sub-pixel points d11, d21, d31, and d41, all located at the first row and first column of their pixel points; J2 includes d12, d22, d32, and d42, all at the first row and second column; J3 includes d13, d23, d33, and d43, all at the second row and first column; and J4 includes d14, d24, d34, and d44, all at the second row and second column.
Operation A2: for each sub-pixel point set, the imaging device obtains the brightness value corresponding to the set according to the brightness value of each sub-pixel point in the set.

Optionally, in operation A2, the imaging device may determine a color coefficient corresponding to each sub-pixel point in the sub-pixel point set, where the color coefficient is determined according to the color channel of the sub-pixel point.

For example, the sub-pixel point d11 belongs to the D1 pixel point, whose filter may be a green filter; that is, the color channel of the D1 pixel point is green, so the color channel of its sub-pixel point d11 is also green, and the imaging device may determine the color coefficient of d11 according to the color channel (green) of d11.

After determining the color coefficient of each sub-pixel point in the set, the imaging device may multiply the color coefficient of each sub-pixel point by its brightness value to obtain the weighted brightness value of each sub-pixel point in the set.

For example, the imaging device may multiply the brightness value of the sub-pixel point d11 by the color coefficient of d11 to obtain the weighted brightness value of d11.

After obtaining the weighted brightness value of each sub-pixel point in the set, the imaging device may add the weighted brightness values of all sub-pixel points in the set to obtain the brightness value corresponding to the set.
For example, for the sub-pixel point set J1, the brightness value corresponding to J1 may be calculated based on the following first formula:

Y_TL = Y_21*C_R + (Y_11 + Y_41)*C_G/2 + Y_31*C_B

Here, Y_TL is the brightness value corresponding to the sub-pixel point set J1; Y_21, Y_11, Y_41, and Y_31 are the brightness values of the sub-pixel points d21, d11, d41, and d31, respectively; C_R is the color coefficient corresponding to d21, C_G/2 is the color coefficient corresponding to d11 and d41, and C_B is the color coefficient corresponding to d31. Accordingly, Y_21*C_R, Y_11*C_G/2, Y_41*C_G/2, and Y_31*C_B are the weighted brightness values of d21, d11, d41, and d31, respectively.
For the sub-pixel point set J2, the brightness value corresponding to J2 may be calculated based on the following second formula:

Y_TR = Y_22*C_R + (Y_12 + Y_42)*C_G/2 + Y_32*C_B

Here, Y_TR is the brightness value corresponding to the sub-pixel point set J2; Y_22, Y_12, Y_42, and Y_32 are the brightness values of the sub-pixel points d22, d12, d42, and d32, respectively; C_R is the color coefficient corresponding to d22, C_G/2 is the color coefficient corresponding to d12 and d42, and C_B is the color coefficient corresponding to d32. Accordingly, Y_22*C_R, Y_12*C_G/2, Y_42*C_G/2, and Y_32*C_B are the weighted brightness values of d22, d12, d42, and d32, respectively.
For the sub-pixel point set J3, the brightness value corresponding to J3 may be calculated based on the following third formula:

Y_BL = Y_23*C_R + (Y_13 + Y_43)*C_G/2 + Y_33*C_B

Here, Y_BL is the brightness value corresponding to the sub-pixel point set J3; Y_23, Y_13, Y_43, and Y_33 are the brightness values of the sub-pixel points d23, d13, d43, and d33, respectively; C_R is the color coefficient corresponding to d23, C_G/2 is the color coefficient corresponding to d13 and d43, and C_B is the color coefficient corresponding to d33. Accordingly, Y_23*C_R, Y_13*C_G/2, Y_43*C_G/2, and Y_33*C_B are the weighted brightness values of d23, d13, d43, and d33, respectively.
For the sub-pixel point set J4, the brightness value corresponding to J4 may be calculated based on the following fourth formula:

Y_BR = Y_24*C_R + (Y_14 + Y_44)*C_G/2 + Y_34*C_B

Here, Y_BR is the brightness value corresponding to the sub-pixel point set J4; Y_24, Y_14, Y_44, and Y_34 are the brightness values of the sub-pixel points d24, d14, d44, and d34, respectively; C_R is the color coefficient corresponding to d24, C_G/2 is the color coefficient corresponding to d14 and d44, and C_B is the color coefficient corresponding to d34. Accordingly, Y_24*C_R, Y_14*C_G/2, Y_44*C_G/2, and Y_34*C_B are the weighted brightness values of d24, d14, d44, and d34, respectively.
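The four formulas above can be evaluated together, as sketched below with NumPy; each pixel point's 2x2 sub-pixel brightness values are stored as an array, and the color coefficients are hypothetical placeholders since the disclosure does not fix their values.

```python
import numpy as np

# 2x2 sub-pixel brightness values of each pixel point in one group
# (D1 and D4 green, D2 red, D3 blue); the numbers are illustrative.
D1 = np.array([[11.0, 12.0], [13.0, 14.0]])   # d11 d12 / d13 d14
D2 = np.array([[21.0, 22.0], [23.0, 24.0]])   # d21 d22 / d23 d24
D3 = np.array([[31.0, 32.0], [33.0, 34.0]])   # d31 d32 / d33 d34
D4 = np.array([[41.0, 42.0], [43.0, 44.0]])   # d41 d42 / d43 d44

C_R, C_G, C_B = 0.299, 0.587, 0.114           # assumed color coefficients

def sub_brightness_map(d1, d2, d3, d4):
    """Apply Y = Y_R*C_R + (Y_Gr + Y_Gb)*C_G/2 + Y_B*C_B element-wise,
    which evaluates the first to fourth formulas for the sets J1 to J4."""
    return d2 * C_R + (d1 + d4) * C_G / 2 + d3 * C_B

lum = sub_brightness_map(D1, D2, D3, D4)
# lum[0, 0] = Y_TL, lum[0, 1] = Y_TR, lum[1, 0] = Y_BL, lum[1, 1] = Y_BR
```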
Operation A3: the imaging device generates the sub-brightness map according to the brightness value corresponding to each sub-pixel point set.

The sub-brightness map includes a plurality of pixels; each pixel in the sub-brightness map corresponds to one sub-pixel point set, and the pixel value of each pixel equals the brightness value corresponding to that set.

FIG. 10 is a schematic diagram of a sub-brightness map in an embodiment. As shown in FIG. 10, the sub-brightness map includes 4 pixels: the pixel in the first row and first column corresponds to the set J1 and has the pixel value Y_TL; the pixel in the first row and second column corresponds to J2 and has the pixel value Y_TR; the pixel in the second row and first column corresponds to J3 and has the pixel value Y_BL; and the pixel in the second row and second column corresponds to J4 and has the pixel value Y_BR.
FIG. 11 is a flowchart of obtaining a target brightness map in an embodiment. As shown in FIG. 11, this way of obtaining the target brightness map may include the following operations.

Operation 1102: determine a target pixel point from each pixel point group to obtain a plurality of target pixel points.

A pixel point group may include a plurality of pixel points arranged in an array, and the imaging device may determine one target pixel point from the pixel points included in each pixel point group, thereby obtaining a plurality of target pixel points.

Optionally, the imaging device may determine, from each pixel point group, a pixel point whose color channel is green (that is, a pixel point whose filter is a green filter), and then determine that green-channel pixel point as the target pixel point.

Since pixel points with a green color channel have better photosensitivity, determining the green-channel pixel point of each pixel point group as the target pixel point yields a higher-quality target brightness map in the subsequent operations.

Operation 1104: generate the sub-brightness map corresponding to each pixel point group according to the brightness values of the sub-pixel points included in each target pixel point.

The sub-brightness map corresponding to each pixel point group includes a plurality of pixels; each pixel corresponds to one sub-pixel point included in the target pixel point of that group, and the pixel value of each pixel is the brightness value of the corresponding sub-pixel point.

FIG. 12 is a schematic diagram, in an embodiment, of generating the sub-brightness map L corresponding to the pixel point group Z1 according to the brightness values of the sub-pixel points included in the target pixel point DM of Z1.

As shown in FIG. 12, the sub-brightness map L includes 4 pixels, each corresponding to one sub-pixel point included in the target pixel point DM, and the pixel value of each pixel is the brightness value of the corresponding sub-pixel point. Specifically, the pixel in the first row and first column of L corresponds to the sub-pixel point in the first row and first column of DM and has the pixel value Gr_TL, namely that sub-pixel point's brightness value; the pixel in the first row and second column of L corresponds to the sub-pixel point in the first row and second column of DM and has the pixel value Gr_TR; the pixel in the second row and first column of L corresponds to the sub-pixel point in the second row and first column of DM and has the pixel value Gr_BL; and the pixel in the second row and second column of L corresponds to the sub-pixel point in the second row and second column of DM and has the pixel value Gr_BR.
Operation 1106: generate the target brightness map according to the sub-brightness map corresponding to each pixel point group.

The imaging device may splice the sub-brightness maps corresponding to the pixel point groups according to the array arrangement of the pixel point groups in the image sensor to obtain the target brightness map.
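A sketch of this splicing step, assuming the 2x2 sub-brightness map of each pixel point group has already been computed and is stored in a 4-D NumPy array indexed by (group row, group column, sub row, sub column); the array layout is an assumption for illustration.

```python
import numpy as np

def splice_target_brightness_map(sub_maps):
    """sub_maps has shape (G_rows, G_cols, 2, 2): one 2x2 sub-brightness map
    per pixel point group. The tiles are spliced following the array
    arrangement of the groups in the image sensor."""
    g_rows, g_cols, s_rows, s_cols = sub_maps.shape
    return sub_maps.transpose(0, 2, 1, 3).reshape(g_rows * s_rows,
                                                  g_cols * s_cols)

tiles = np.arange(24, dtype=float).reshape(2, 3, 2, 2)
target = splice_target_brightness_map(tiles)    # shape (4, 6)
```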
FIG. 13 is a flowchart of obtaining a target brightness map in another embodiment. As shown in FIG. 13, this way of obtaining the target brightness map may include the following operations.

Operation 1302: determine the pixel points at the same position from each pixel point group to obtain a plurality of pixel point sets.

The pixel points included in each pixel point set occupy the same position within their pixel point groups.

As shown in FIG. 14, the imaging device determines the pixel points at the same position from the pixel point groups Z1, Z2, Z3, and Z4, obtaining 4 pixel point sets P1, P2, P3, and P4. The set P1 includes the pixel points D11, D21, D31, and D41, all located at the first row and first column of their groups; P2 includes D12, D22, D32, and D42, all at the first row and second column; P3 includes D13, D23, D33, and D43, all at the second row and first column; and P4 includes D14, D24, D34, and D44, all at the second row and second column.
Operation 1304: the imaging device generates, according to the brightness values of the pixel points in the plurality of pixel point sets, a plurality of target brightness maps in one-to-one correspondence with the pixel point sets.

As described above, the brightness value of a pixel point of the image sensor can be characterized by the brightness values of its sub-pixel points. Therefore, for each pixel point set, the imaging device may generate the target brightness map corresponding to that set according to the brightness value of each sub-pixel point included in each pixel point of the set.

The target brightness map corresponding to a given pixel point set includes a plurality of pixels; each pixel in that map corresponds to one sub-pixel point of the pixel points included in the set, and the pixel value of each pixel is the brightness value of the corresponding sub-pixel point.

In the way of obtaining the target brightness map shown in FIG. 11, the imaging device determines one pixel point (the target pixel point) from each pixel point group and generates the target brightness map from the determined pixel points; in other words, in this second way of obtaining the target brightness map, the imaging device generates a single target brightness map from one pixel point of each pixel point group.

In the way of obtaining the target brightness map shown in FIG. 13, by contrast, the imaging device generates one target brightness map from one pixel point of each group, another target brightness map from another pixel point of each group, yet another target brightness map from a further pixel point of each group, and so on. In this way, the number of target brightness maps obtained by the imaging device equals the number of pixel points included in a pixel point group.
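A sketch contrasting this with the FIG. 11 method, assuming the raw readings are held in a 6-D NumPy array indexed by (group row, group column, pixel row, pixel column, sub row, sub column); this data layout and the function name are illustrative assumptions, not structures fixed by the disclosure.

```python
import numpy as np

def per_position_target_maps(raw):
    """raw has shape (Gr, Gc, 2, 2, 2, 2): an array of 2x2-pixel groups whose
    pixels each hold 2x2 sub-pixel brightness values. For every pixel
    position (i, j) within the group, splice the sub-pixel tiles of the
    pixels at that position into one target brightness map."""
    g_rows, g_cols, p_rows, p_cols, s_rows, s_cols = raw.shape
    maps = {}
    for i in range(p_rows):
        for j in range(p_cols):
            tiles = raw[:, :, i, j]            # shape (Gr, Gc, 2, 2)
            maps[(i, j)] = tiles.transpose(0, 2, 1, 3).reshape(
                g_rows * s_rows, g_cols * s_cols)
    return maps                                 # four maps for a 2x2 group

raw = np.random.rand(2, 2, 2, 2, 2, 2)
target_maps = per_position_target_maps(raw)    # len(target_maps) == 4
```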
After obtaining the multiple target brightness maps, the imaging device performs segmentation processing on each target brightness map to obtain its first and second segmented brightness maps, and, for the first and second segmented brightness maps corresponding to each target brightness map, determines the phase differences of the mutually matched pixels according to the position differences of the mutually matched pixels in the two segmented brightness maps.

For each target brightness map, the imaging device can obtain an intermediate phase difference map from the phase differences of the mutually matched pixels in that map's first and second segmented brightness maps, and can then obtain the target phase difference map from the intermediate phase difference maps corresponding to the target brightness maps. The target phase difference map obtained in this way has high precision: when a pixel point group includes 4 pixel points, its precision is 4 times that of the target phase difference map obtained with the second way of obtaining the target brightness map described above.

The embodiments of the present application now describe the technical process of obtaining the target phase difference map from the intermediate phase difference maps corresponding to the target brightness maps; this process may include operations B1 to B3.
Operation B1: the imaging device determines the pixels at the same position from each intermediate phase difference map to obtain a plurality of phase difference pixel sets.

The pixels included in each phase difference pixel set occupy the same position within their intermediate phase difference maps.

Referring to FIG. 15, the imaging device determines the pixels at the same position from intermediate phase difference maps 1, 2, 3, and 4, obtaining 4 phase difference pixel sets Y1, Y2, Y3, and Y4. The set Y1 includes the pixel PD_Gr_1 in intermediate phase difference map 1, PD_R_1 in intermediate phase difference map 2, PD_B_1 in intermediate phase difference map 3, and PD_Gb_1 in intermediate phase difference map 4; Y2 includes PD_Gr_2, PD_R_2, PD_B_2, and PD_Gb_2; Y3 includes PD_Gr_3, PD_R_3, PD_B_3, and PD_Gb_3; and Y4 includes PD_Gr_4, PD_R_4, PD_B_4, and PD_Gb_4.
Operation B2: for each phase difference pixel set, the imaging device splices the pixels of the set to obtain a sub-phase-difference map corresponding to the set.

The sub-phase-difference map includes a plurality of pixels; each pixel corresponds to one pixel of the phase difference pixel set, and the pixel value of each pixel equals the pixel value of the corresponding pixel.

Operation B3: the imaging device splices the obtained sub-phase-difference maps to obtain the target phase difference map.

Referring to FIG. 16, which is a schematic diagram of a target phase difference map: the target phase difference map includes sub-phase-difference maps 1, 2, 3, and 4, which correspond to the phase difference pixel sets Y1, Y2, Y3, and Y4, respectively.
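A sketch of operations B1 to B3, assuming the four intermediate phase difference maps have equal sizes; the 2x2 interleaving order of the four maps inside each sub-phase-difference map is an assumed layout, since the disclosure describes the splicing only schematically.

```python
import numpy as np

def splice_target_pd_map(pd_gr, pd_r, pd_b, pd_gb):
    """Same-position pixels across the four intermediate phase difference
    maps form one phase difference pixel set; each set is spliced into a
    2x2 sub-phase-difference map, and the sub-maps are spliced into the
    target phase difference map."""
    maps = [np.asarray(m, dtype=float) for m in (pd_gr, pd_r, pd_b, pd_gb)]
    h, w = maps[0].shape
    target = np.empty((2 * h, 2 * w), dtype=float)
    target[0::2, 0::2] = maps[0]    # each 2x2 block is one sub-map
    target[0::2, 1::2] = maps[1]
    target[1::2, 0::2] = maps[2]
    target[1::2, 1::2] = maps[3]
    return target
```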
FIG. 17 is a flowchart, in an embodiment, of the way of segmenting the target brightness map to obtain the first segmented brightness map and the second segmented brightness map; it can be applied to the imaging device shown in FIG. 3 and, as shown in FIG. 17, may include the following operations.

Operation 1702: perform segmentation processing on the target brightness map to obtain a plurality of brightness map regions.

Each brightness map region includes one row of pixels of the target brightness map, or each brightness map region includes one column of pixels of the target brightness map.

Optionally, the imaging device may segment the target brightness map column by column along the row direction, obtaining the pixel columns of the target brightness map (that is, the brightness map regions described above).

Optionally, the imaging device may segment the target brightness map row by row along the column direction, obtaining the pixel rows of the target brightness map (that is, the brightness map regions described above).
Operation 1704: obtain a plurality of first brightness map regions and a plurality of second brightness map regions from the plurality of brightness map regions.

The first brightness map regions include the pixels of the even-numbered rows of the target brightness map, or the first brightness map regions include the pixels of the even-numbered columns of the target brightness map.

The second brightness map regions include the pixels of the odd-numbered rows of the target brightness map, or the second brightness map regions include the pixels of the odd-numbered columns of the target brightness map.

In other words, when the target brightness map is segmented column by column, the imaging device may determine the even-numbered columns as the first brightness map regions and the odd-numbered columns as the second brightness map regions.

When the target brightness map is segmented row by row, the imaging device may determine the even-numbered rows as the first brightness map regions and the odd-numbered rows as the second brightness map regions.

Operation 1706: compose the first segmented brightness map from the plurality of first brightness map regions, and compose the second segmented brightness map from the plurality of second brightness map regions.
Referring to FIG. 18, suppose the target brightness map includes 6 rows and 6 columns of pixels. When the target brightness map is segmented column by column, the imaging device may determine the 1st, 3rd, and 5th columns of pixels as the second brightness map regions and the 2nd, 4th, and 6th columns of pixels as the first brightness map regions. The imaging device may then splice the first brightness map regions to obtain the first segmented brightness map T1, which includes the 2nd, 4th, and 6th columns of the target brightness map, and splice the second brightness map regions to obtain the second segmented brightness map T2, which includes the 1st, 3rd, and 5th columns.

Referring to FIG. 19, again with a target brightness map of 6 rows and 6 columns of pixels, when the target brightness map is segmented row by row, the imaging device may determine the 1st, 3rd, and 5th rows of pixels as the second brightness map regions and the 2nd, 4th, and 6th rows of pixels as the first brightness map regions. The imaging device may then splice the first brightness map regions to obtain the first segmented brightness map T3, which includes the 2nd, 4th, and 6th rows of the target brightness map, and splice the second brightness map regions to obtain the second segmented brightness map T4, which includes the 1st, 3rd, and 5th rows.
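A sketch of operations 1702 to 1706, assuming the target brightness map is a NumPy array; note that the text numbers rows and columns from 1, so its even-numbered rows and columns correspond to the odd 0-based indices here.

```python
import numpy as np

def split_brightness_map(target, by_rows):
    """Return (first, second) segmented brightness maps. Row-by-row
    segmentation keeps the even-numbered rows in the first map and the
    odd-numbered rows in the second; column-by-column segmentation does
    the same with columns."""
    if by_rows:
        return target[1::2, :], target[0::2, :]
    return target[:, 1::2], target[:, 0::2]

t = np.arange(36, dtype=float).reshape(6, 6)
t1, t2 = split_brightness_map(t, by_rows=False)  # T1: cols 2,4,6; T2: 1,3,5
t3, t4 = split_brightness_map(t, by_rows=True)   # T3: rows 2,4,6; T4: 1,3,5
```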
Referring to FIG. 20, a way is provided of determining the phase differences of mutually matched pixels according to the position differences of the mutually matched pixels in the first segmented brightness map and the second segmented brightness map; it can be applied to the imaging device shown in FIG. 3 and, as shown in FIG. 20, may include the following operations.

Operation 2002: when the brightness map regions include rows of pixels of the target brightness map, determine a first neighboring pixel set in each row of pixels included in the first segmented brightness map.

The pixels included in a first neighboring pixel set correspond to the same pixel point group in the image sensor.

Referring to the sub-brightness map shown in FIG. 10 above: when the brightness map regions include rows of pixels of the target brightness map, that is, when the imaging device segments the target brightness map row by row along the column direction, the two pixels in the first row of that sub-brightness map lie in the same row of pixels of the target brightness map, so after segmentation they lie in the same brightness map region and therefore in the same segmented brightness map. Likewise, the two pixels in the second row of the sub-brightness map lie in the same brightness map region and in the other segmented brightness map. Assuming the first row of the sub-brightness map lies in an even-numbered pixel row of the target brightness map, the two pixels of its first row lie in the first segmented brightness map and the two pixels of its second row lie in the second segmented brightness map.

The imaging device may determine the two pixels in the first row of that sub-brightness map as a first neighboring pixel set, because these two pixels correspond to the same pixel point group in the image sensor (the pixel point group shown in FIG. 9).
Operation 2004: for each first neighboring pixel set, the imaging device searches the second segmented brightness map for the first matching pixel set corresponding to that first neighboring pixel set.

For each first neighboring pixel set, the imaging device may obtain, in the first segmented brightness map, a number of pixels surrounding the set, and compose a search pixel matrix from the first neighboring pixel set and its surrounding pixels; for example, the search pixel matrix may include 9 pixels in 3 rows and 3 columns. The imaging device may then search the second segmented brightness map for a pixel matrix similar to the search pixel matrix. How to judge whether pixel matrices are similar has been explained above and is not repeated here.

After finding a pixel matrix similar to the search pixel matrix in the second segmented brightness map, the imaging device may extract the first matching pixel set from the found pixel matrix.

The pixels in the first matching pixel set found by the search and the pixels in the first neighboring pixel set correspond, respectively, to the different images formed in the image sensor by imaging light entering the lens from different directions.

Operation 2006: determine, according to the position difference between each first neighboring pixel set and each first matching pixel set, the phase difference of the mutually corresponding first neighboring pixel set and first matching pixel set, to obtain the phase difference value in the second direction.

The position difference between a first neighboring pixel set and a first matching pixel set refers to the difference between the position of the first neighboring pixel set in the first segmented brightness map and the position of the first matching pixel set in the second segmented brightness map.

When the first and second segmented brightness maps so obtained are called the upper image and the lower image, respectively, the phase differences obtained from the upper and lower images can reflect the differences in the imaging positions of objects in the vertical direction.
Referring to FIG. 21, which shows a way of determining the phase differences of mutually matched pixels according to the position differences of the mutually matched pixels in the first segmented brightness map and the second segmented brightness map; it can be applied to the imaging device shown in FIG. 3 and, as shown in FIG. 21, may include the following operations.

Operation 2102: when the brightness map regions include columns of pixels of the target brightness map, determine a second neighboring pixel set in each column of pixels included in the first segmented brightness map, where the pixels included in the second neighboring pixel set correspond to the same pixel point group.

Operation 2104: for each second neighboring pixel set, search the second segmented brightness map for the second matching pixel set corresponding to that second neighboring pixel set.

Operation 2106: determine, according to the position difference between each second neighboring pixel set and each second matching pixel set, the phase difference of the mutually corresponding second neighboring pixel set and second matching pixel set, to obtain the phase difference value in the first direction.

The technical process of operations 2102 to 2106 is analogous to that of operations 2002 to 2006 and is not repeated here.

When the brightness map regions include columns of pixels of the target brightness map, the first and second segmented brightness maps so obtained may be called the left image and the right image, respectively, and the phase differences obtained from the left and right images can reflect the differences in the imaging positions of objects in the horizontal direction.

Since the phase differences obtained when the brightness map regions include columns of the target brightness map reflect the imaging position differences of objects in the horizontal direction, and the phase differences obtained when the regions include rows reflect the imaging position differences of objects in the vertical direction, the phase differences obtained according to the embodiments of the present application can reflect both the vertical and the horizontal imaging position differences of objects, and their precision is therefore higher.
In an embodiment, the above focusing method may further include generating a depth value from the defocus distance value. The defocus distance value can be used to calculate the image distance in the in-focus state, and the object distance can be obtained from the image distance and the focal length; this object distance is the depth value.
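As a minimal sketch of this step, assuming the usual thin-lens model 1/f = 1/u + 1/v (the disclosure itself does not spell out the formula), the defocus distance shifts the current image distance to the in-focus image distance v, and the object distance u, i.e. the depth value, follows from v and the focal length f:

```python
def depth_from_defocus(defocus_mm, current_image_dist_mm, focal_len_mm):
    """Illustrative sketch under the thin-lens assumption 1/f = 1/u + 1/v.
    Requires the in-focus image distance v to exceed the focal length f."""
    v = current_image_dist_mm + defocus_mm           # in-focus image distance
    return (focal_len_mm * v) / (v - focal_len_mm)   # object distance u = depth
```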
It should be understood that although the operations in the flowcharts of FIG. 7, FIG. 11, FIG. 13, FIG. 16, and FIG. 17 to FIG. 21 are displayed in sequence as indicated by the arrows, these operations are not necessarily executed in the order indicated. Unless explicitly stated herein, the execution of these operations is not strictly limited in order, and they may be executed in other orders. Moreover, at least some of the operations in FIG. 7, FIG. 11, FIG. 13, FIG. 16, and FIG. 17 to FIG. 21 may include multiple sub-operations or stages; these sub-operations or stages are not necessarily completed at the same time but may be executed at different times, and their execution order is not necessarily sequential either, as they may be executed in turn or alternately with other operations or with at least part of the sub-operations or stages of other operations.
FIG. 22 is a structural block diagram of a phase difference acquisition apparatus according to an embodiment. As shown in FIG. 22, the apparatus is applied to an electronic device that includes an image sensor; the image sensor includes a plurality of pixel point groups arranged in an array, each pixel point group includes M*N pixel points arranged in an array, and each pixel point corresponds to one photosensitive unit, where M and N are both natural numbers greater than or equal to 2. The apparatus includes a scene detection module 2210 and a phase difference acquisition module 2212.
The scene detection module 2210 is configured to perform scene detection on a captured image to obtain a scene type.
The phase difference acquisition module 2212 is configured to acquire, through the image sensor, a phase difference value corresponding to the scene type, the phase difference value being a phase difference value in a first direction or a phase difference value in a second direction, the first direction forming a preset angle with the second direction.
In an embodiment, the scene detection module 2210 is further configured to perform scene detection on the captured image through an artificial intelligence model to obtain the scene type, the artificial intelligence model being trained using sample images containing scene types.
In an embodiment, the scene detection module 2210 is further configured to detect, through an edge operator, the total number of edge points, the number of first-direction edge points, and the number of second-direction edge points in the scene of the captured image, and to determine the scene type of the captured image according to the ratio of the number of first-direction edge points to the total number of edge points and the ratio of the number of second-direction edge points to the total number of edge points.
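A rough sketch of such edge-operator scene detection is given below; the Sobel operator, the edge threshold, the 0.6 ratio threshold, and the scene labels are all illustrative assumptions rather than values taken from the disclosure.

```python
import numpy as np
from scipy import ndimage

def detect_scene_type(gray, ratio_thresh=0.6):
    """Illustrative edge-operator scene detection: count edge points whose
    gradient is predominantly vertical (horizontal edges) or predominantly
    horizontal (vertical edges) and classify the scene by their share of
    the total number of edge points."""
    g = gray.astype(np.float32)
    gx = ndimage.sobel(g, axis=1)          # horizontal intensity gradient
    gy = ndimage.sobel(g, axis=0)          # vertical intensity gradient
    mag = np.hypot(gx, gy)
    edges = mag > 2.0 * mag.mean()         # crude edge-point mask
    total = int(edges.sum())
    if total == 0:
        return "flat"
    # a dominant vertical gradient marks a horizontal edge, and vice versa
    horizontal_edge_pts = int((edges & (np.abs(gy) > np.abs(gx))).sum())
    vertical_edge_pts = total - horizontal_edge_pts
    if horizontal_edge_pts / total > ratio_thresh:
        return "horizontal_texture"
    if vertical_edge_pts / total > ratio_thresh:
        return "vertical_texture"
    return "mixed"
```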
In an embodiment, the phase difference acquisition module 2212 is further configured to acquire, through the image sensor, the phase difference value in the second direction when the scene type is a horizontal texture scene, and the phase difference value in the first direction when the scene type is a vertical texture scene.
In an embodiment, the phase difference acquisition module 2212 includes a brightness determination unit and a phase difference determination unit.
The brightness determination unit is configured to acquire a target brightness map according to the brightness values of the pixel points included in each pixel point group.
The phase difference determination unit is configured to perform segmentation processing on the target brightness map to obtain a first segmented brightness map and a second segmented brightness map, determine the phase difference values of mutually matched pixels according to the position differences of the mutually matched pixels in the first segmented brightness map and the second segmented brightness map, and determine the phase difference value in the first direction or the phase difference value in the second direction according to the phase difference values of the mutually matched pixels.
In an embodiment, the brightness determination unit is further configured to, for each pixel point group, acquire a sub-brightness map corresponding to the pixel point group according to the brightness values of the sub-pixel points at the same position in each pixel point of the group, and to generate the target brightness map according to the sub-brightness map corresponding to each pixel point group.
In an embodiment, the brightness determination unit is further configured to determine the sub-pixel points at the same position from each pixel point to obtain multiple sub-pixel point sets, where the sub-pixel points included in each sub-pixel point set occupy the same position within their pixel points; for each sub-pixel point set, to acquire the brightness value corresponding to the set according to the brightness value of each sub-pixel point in the set; and to generate the sub-brightness map according to the brightness value corresponding to each sub-pixel point set.
In an embodiment, the brightness determination unit is further configured to determine the color coefficient corresponding to each sub-pixel point in the sub-pixel point set, the color coefficient being determined according to the color channel corresponding to the sub-pixel point; to multiply the color coefficient corresponding to each sub-pixel point in the set by its brightness value to obtain the weighted brightness of each sub-pixel point in the set; and to add the weighted brightnesses of the sub-pixel points in the set to obtain the brightness value corresponding to the set.
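A minimal sketch of this weighted-brightness computation follows; the specific coefficient values are assumptions (luma-style weights chosen for illustration), since the disclosure only states that the coefficients are determined by the color channel.

```python
# Assumed per-channel color coefficients (illustrative luma-style weights).
COLOR_COEFF = {"R": 0.299, "G": 0.587, "B": 0.114}

def set_brightness(subpixels):
    """Brightness value of one sub-pixel point set: multiply each
    sub-pixel's brightness by the coefficient of its color channel,
    then sum the weighted brightnesses."""
    return sum(COLOR_COEFF[channel] * lum for channel, lum in subpixels)

# Usage: a set drawn from one position of a 2*2 RGGB pixel point group.
print(set_brightness([("R", 120.0), ("G", 130.0), ("G", 128.0), ("B", 90.0)]))
```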
In an embodiment, each pixel point includes a plurality of sub-pixel points arranged in an array.
The brightness determination unit is further configured to determine a target pixel point from each pixel point group to obtain a plurality of target pixel points, to generate a sub-brightness map corresponding to each pixel point group according to the brightness values of the sub-pixel points included in each target pixel point, and to generate the target brightness map according to the sub-brightness map corresponding to each pixel point group.
In an embodiment, the brightness determination unit is further configured to determine, from each pixel point group, the pixel point whose color channel is green, and to determine the pixel point whose color channel is green as the target pixel point.
In an embodiment, the brightness determination unit is further configured to determine the pixel points at the same position from each pixel point group to obtain multiple pixel point sets, where the pixel points included in each pixel point set occupy the same position within their pixel point groups, and to generate, according to the brightness values of the pixel points in the multiple pixel point sets, multiple target brightness maps in one-to-one correspondence with the multiple pixel point sets.
The phase difference determination unit is further configured to, for each target brightness map, generate an intermediate phase difference map corresponding to the target brightness map according to the phase differences of the mutually matched pixels, and to generate the phase difference value in the first direction and the phase difference value in the second direction according to the intermediate phase difference map corresponding to each target brightness map.
In an embodiment, the phase difference determination unit is further configured to determine the pixels at the same position from each intermediate phase difference map to obtain multiple phase difference pixel sets, where the pixels included in each phase difference pixel set occupy the same position within the intermediate phase difference maps;
for each phase difference pixel set, to splice the pixels in the set to obtain a sub-phase-difference map corresponding to the set;
and to splice the resulting multiple sub-phase-difference maps to obtain a target phase difference map, the target phase difference map including the phase difference value in the first direction and the phase difference value in the second direction.
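As a rough sketch of this splicing step (the tiling layout and the assumption that the number of intermediate maps is a perfect square, e.g. four for 2*2 pixel point groups, are illustrative choices not fixed by the disclosure):

```python
import numpy as np

def stitch_target_pd_map(intermediate_pd_maps):
    """Pixels at the same (i, j) position of all k intermediate phase
    difference maps form one phase difference pixel set; each set is
    spliced into a small side*side tile (a sub-phase-difference map),
    and the tiles are placed side by side to form the target phase
    difference map. Assumes k is a perfect square."""
    maps = np.stack(intermediate_pd_maps)          # shape (k, h, w)
    k, h, w = maps.shape
    side = int(round(np.sqrt(k)))
    assert side * side == k, "sketch assumes a square number of maps"
    # output position (i*side + a, j*side + b) holds maps[a*side + b, i, j]
    return maps.reshape(side, side, h, w).transpose(2, 0, 3, 1).reshape(h * side, w * side)
```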
In an embodiment, the phase difference determination unit is further configured to perform segmentation processing on the target brightness map to obtain multiple brightness map regions, each brightness map region including a row of pixels in the target brightness map or a column of pixels in the target brightness map;
to acquire multiple first brightness map regions and multiple second brightness map regions from the multiple brightness map regions, the first brightness map regions including the pixels of even rows or even columns of the target brightness map, and the second brightness map regions including the pixels of odd rows or odd columns of the target brightness map;
and to compose the first segmented brightness map from the multiple first brightness map regions and the second segmented brightness map from the multiple second brightness map regions.
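This even/odd segmentation can be sketched in a few lines (zero-based indexing is an assumption of the sketch; the disclosure only distinguishes even rows or columns from odd ones):

```python
import numpy as np

def split_brightness_map(target, by="row"):
    """Segmentation sketch: group the even-indexed and odd-indexed rows
    (or columns) of the target brightness map into the first and second
    segmented brightness maps, respectively."""
    if by == "row":
        return target[0::2, :], target[1::2, :]   # upper map, lower map
    return target[:, 0::2], target[:, 1::2]       # left map, right map

# Usage: a row-wise split yielding the upper and lower maps.
upper, lower = split_brightness_map(np.arange(64, dtype=np.float32).reshape(8, 8))
```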
In an embodiment, the phase difference determination unit is further configured to, when the brightness map region includes a row of pixels in the target brightness map, determine a first neighboring pixel set in each row of pixels included in the first segmented brightness map, the pixels included in the first neighboring pixel set corresponding to the same pixel point group;
for each first neighboring pixel set, to search the second segmented brightness map for a first matching pixel set corresponding to the first neighboring pixel set;
and to determine, according to the position difference between each first neighboring pixel set and the corresponding first matching pixel set, the phase difference between the mutually corresponding first neighboring pixel set and first matching pixel set, obtaining the phase difference value in the second direction.
In an embodiment, the phase difference determination unit is further configured to, when the brightness map region includes a column of pixels in the target brightness map, determine a second neighboring pixel set in each column of pixels included in the first segmented brightness map, the pixels included in the second neighboring pixel set corresponding to the same pixel point group;
for each second neighboring pixel set, to search the second segmented brightness map for a second matching pixel set corresponding to the second neighboring pixel set;
and to determine, according to the position difference between each second neighboring pixel set and the corresponding second matching pixel set, the phase difference between the mutually corresponding second neighboring pixel set and second matching pixel set, obtaining the phase difference value in the first direction.
In an embodiment, the phase difference acquisition apparatus further includes a processing module and a control module.
The processing module is configured to determine a defocus distance value and a movement direction according to the phase difference value.
The control module is configured to control the lens to move for focusing according to the defocus distance value and the movement direction.
In an embodiment, the phase difference acquisition apparatus further includes a correction module configured to correct the calculated phase difference value using a gain map.
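A minimal sketch of how the correction, processing, and control steps might chain together; the multiplicative gain correction and the linear phase-difference-to-defocus conversion are common PDAF conventions assumed here for illustration, not details given in the disclosure.

```python
def correct_pd(pd, gain):
    """Correct a computed phase difference value with its calibrated gain
    (assumed multiplicative; the disclosure only states a gain map is used)."""
    return pd * gain

def pd_to_defocus(pd, conversion_slope):
    """Assumed linear conversion from phase difference to defocus: the sign
    gives the lens movement direction and the magnitude the defocus distance."""
    defocus = pd * conversion_slope
    direction = "toward_sensor" if defocus < 0 else "away_from_sensor"
    return abs(defocus), direction

# Usage: correct the raw value, then derive the focus move.
distance, direction = pd_to_defocus(correct_pd(1.8, 0.95), conversion_slope=12.0)
```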
The division of the modules in the above phase difference acquisition apparatus is for illustration only; in other embodiments, the apparatus may be divided into different modules as needed to complete all or part of its functions.
FIG. 23 is a schematic diagram of the internal structure of an electronic device in an embodiment. As shown in FIG. 23, the electronic device includes a processor and a memory connected through a system bus. The processor provides computing and control capabilities to support the operation of the entire electronic device. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program; the computer program can be executed by the processor to implement the focusing method provided in the embodiments of the present application. The internal memory provides a cached running environment for the operating system and the computer program in the non-volatile storage medium. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
Each module in the focusing apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on a terminal or a server. The program modules constituted by the computer program may be stored in the memory of the terminal or the server. When the computer program is executed by the processor, the operations of the methods described in the embodiments of the present application are implemented.
The embodiments of the present application also provide a computer-readable storage medium: one or more non-volatile computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the operations of the phase difference acquisition method.
Also provided is a computer program product containing instructions that, when run on a computer, cause the computer to execute the phase difference acquisition method.
Any reference to memory, storage, a database, or other media used in the embodiments of the present application may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM), which serves as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application, and their descriptions are specific and detailed, but they should not therefore be construed as limiting the scope of the present patent. It should be noted that those of ordinary skill in the art can make several modifications and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (20)

1. A method for acquiring a phase difference, applied to an electronic device, the electronic device comprising an image sensor, the image sensor comprising a plurality of pixel point groups arranged in an array, each pixel point group comprising M*N pixel points arranged in an array, and each pixel point corresponding to one photosensitive unit, wherein M and N are both natural numbers greater than or equal to 2; the method comprising:
    performing scene detection on a captured image to obtain a scene type; and
    acquiring, through the image sensor, a phase difference value corresponding to the scene type, the phase difference value being a phase difference value in a first direction or a phase difference value in a second direction, the first direction forming a preset angle with the second direction.
2. The method according to claim 1, wherein performing scene detection on the captured image to obtain the scene type comprises:
    performing scene detection on the captured image through an artificial intelligence model to obtain the scene type, the artificial intelligence model being trained using sample images containing scene types.
3. The method according to claim 1, wherein performing scene detection on the captured image to obtain the scene type comprises:
    detecting, through an edge operator, the total number of edge points, the number of first-direction edge points, and the number of second-direction edge points in the scene of the captured image; and
    determining the scene type of the captured image according to the ratio of the number of first-direction edge points to the total number of edge points and the ratio of the number of second-direction edge points to the total number of edge points.
4. The method according to claim 1, wherein acquiring, through the image sensor, the phase difference value corresponding to the scene type comprises:
    when the scene type is a horizontal texture scene, acquiring the phase difference value in the second direction through the image sensor; and
    when the scene type is a vertical texture scene, acquiring the phase difference value in the first direction through the image sensor.
5. The method according to claim 1, wherein acquiring, through the image sensor, the phase difference value corresponding to the scene type comprises:
    acquiring a target brightness map according to the brightness values of the pixel points included in each pixel point group; and
    performing segmentation processing on the target brightness map to obtain a first segmented brightness map and a second segmented brightness map, determining the phase difference values of mutually matched pixels according to the position differences of the mutually matched pixels in the first segmented brightness map and the second segmented brightness map, and determining the phase difference value in the first direction or the phase difference value in the second direction according to the phase difference values of the mutually matched pixels.
6. The method according to claim 5, wherein performing segmentation processing on the target brightness map to obtain the first segmented brightness map and the second segmented brightness map comprises:
    performing segmentation processing on the target brightness map to obtain a plurality of brightness map regions, each brightness map region including a row of pixels in the target brightness map, or each brightness map region including a column of pixels in the target brightness map;
    acquiring a plurality of first brightness map regions and a plurality of second brightness map regions from the plurality of brightness map regions, the first brightness map regions including the pixels of even rows or even columns of the target brightness map, and the second brightness map regions including the pixels of odd rows or odd columns of the target brightness map; and
    composing the first segmented brightness map from the plurality of first brightness map regions and the second segmented brightness map from the plurality of second brightness map regions.
7. The method according to claim 6, wherein determining the phase differences of the mutually matched pixels according to the position differences of the mutually matched pixels in the first segmented brightness map and the second segmented brightness map comprises:
    when the brightness map region includes a row of pixels in the target brightness map, determining a first neighboring pixel set in each row of pixels included in the first segmented brightness map, the pixels included in the first neighboring pixel set corresponding to the same pixel point group;
    for each first neighboring pixel set, searching the second segmented brightness map for a first matching pixel set corresponding to the first neighboring pixel set; and
    determining, according to the position difference between each first neighboring pixel set and the corresponding first matching pixel set, the phase difference between the mutually corresponding first neighboring pixel set and first matching pixel set, to obtain the phase difference value in the second direction.
8. The method according to claim 6, wherein determining the phase differences of the mutually matched pixels according to the position differences of the mutually matched pixels in the first segmented brightness map and the second segmented brightness map comprises:
    when the brightness map region includes a column of pixels in the target brightness map, determining a second neighboring pixel set in each column of pixels included in the first segmented brightness map, the pixels included in the second neighboring pixel set corresponding to the same pixel point group;
    for each second neighboring pixel set, searching the second segmented brightness map for a second matching pixel set corresponding to the second neighboring pixel set; and
    determining, according to the position difference between each second neighboring pixel set and the corresponding second matching pixel set, the phase difference between the mutually corresponding second neighboring pixel set and second matching pixel set, to obtain the phase difference value in the first direction.
9. The method according to claim 5, wherein each pixel point includes a plurality of sub-pixel points arranged in an array, and acquiring the target brightness map according to the brightness values of the pixel points included in each pixel point group comprises:
    for each pixel point group, acquiring a sub-brightness map corresponding to the pixel point group according to the brightness values of the sub-pixel points at the same position in each pixel point of the group; and
    generating the target brightness map according to the sub-brightness map corresponding to each pixel point group.
10. The method according to claim 9, wherein acquiring the sub-brightness map corresponding to the pixel point group according to the brightness values of the sub-pixel points at the same position in each pixel point of the group comprises:
    determining the sub-pixel points at the same position from each pixel point to obtain a plurality of sub-pixel point sets, wherein the sub-pixel points included in each sub-pixel point set occupy the same position within their pixel points;
    for each sub-pixel point set, acquiring the brightness value corresponding to the sub-pixel point set according to the brightness value of each sub-pixel point in the set; and
    generating the sub-brightness map according to the brightness value corresponding to each sub-pixel point set.
11. The method according to claim 10, wherein acquiring the brightness value corresponding to the sub-pixel point set according to the brightness value of each sub-pixel point in the set comprises:
    determining the color coefficient corresponding to each sub-pixel point in the sub-pixel point set, the color coefficient being determined according to the color channel corresponding to the sub-pixel point;
    multiplying the color coefficient corresponding to each sub-pixel point in the set by its brightness value to obtain the weighted brightness of each sub-pixel point in the set; and
    adding the weighted brightnesses of the sub-pixel points in the set to obtain the brightness value corresponding to the sub-pixel point set.
12. The method according to claim 5, wherein each pixel point includes a plurality of sub-pixel points arranged in an array, and
    acquiring the target brightness map according to the brightness values of the pixel points included in each pixel point group comprises:
    determining a target pixel point from each pixel point group to obtain a plurality of target pixel points;
    generating a sub-brightness map corresponding to each pixel point group according to the brightness values of the sub-pixel points included in each target pixel point; and
    generating the target brightness map according to the sub-brightness map corresponding to each pixel point group.
13. The method according to claim 12, wherein determining the target pixel point from each pixel point group comprises:
    determining, from each pixel point group, the pixel point whose color channel is green; and
    determining the pixel point whose color channel is green as the target pixel point.
14. The method according to claim 5, wherein acquiring the target brightness map according to the brightness values of the pixel points included in each pixel point group comprises:
    determining the pixel points at the same position from each pixel point group to obtain a plurality of pixel point sets, wherein the pixel points included in each pixel point set occupy the same position within their pixel point groups; and
    generating, according to the brightness values of the pixel points in the plurality of pixel point sets, a plurality of target brightness maps in one-to-one correspondence with the plurality of pixel point sets;
    and wherein generating the phase difference value in the first direction or the phase difference value in the second direction according to the phase differences of the mutually matched pixels comprises:
    for each target brightness map, generating an intermediate phase difference map corresponding to the target brightness map according to the phase differences of the mutually matched pixels; and
    generating the phase difference value in the first direction or the phase difference value in the second direction according to the intermediate phase difference map corresponding to each target brightness map.
15. The method according to claim 14, wherein generating the phase difference value in the first direction or the phase difference value in the second direction according to the intermediate phase difference map corresponding to each target brightness map comprises:
    determining the pixels at the same position from each intermediate phase difference map to obtain a plurality of phase difference pixel sets, wherein the pixels included in each phase difference pixel set occupy the same position within the intermediate phase difference maps;
    for each phase difference pixel set, splicing the pixels in the set to obtain a sub-phase-difference map corresponding to the set; and
    splicing the resulting plurality of sub-phase-difference maps to obtain a target phase difference map, the target phase difference map including the phase difference value in the first direction or the phase difference value in the second direction.
16. The method according to claim 1, further comprising:
    determining a defocus distance value according to the phase difference value; and
    controlling a lens to move for focusing according to the defocus distance value.
17. The method according to claim 1, further comprising:
    correcting the calculated phase difference value using a gain map.
18. An apparatus for acquiring a phase difference, applied to an electronic device, the electronic device comprising an image sensor, the image sensor comprising a plurality of pixel point groups arranged in an array, each pixel point group comprising M*N pixel points arranged in an array, and each pixel point corresponding to one photosensitive unit, wherein M and N are both natural numbers greater than or equal to 2; the apparatus comprising:
    a scene detection module configured to perform scene detection on a captured image to obtain a scene type; and
    a phase difference acquisition module configured to acquire, through the image sensor, a phase difference value corresponding to the scene type, the phase difference value being a phase difference value in a first direction or a phase difference value in a second direction, the first direction forming a preset angle with the second direction.
19. An electronic device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, implements the steps of the method according to any one of claims 1 to 17.
20. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 17.