CN113938604A - Focusing method, focusing apparatus, electronic device and storage medium - Google Patents

Focusing method, focusing apparatus, electronic device and storage medium

Info

Publication number: CN113938604A (granted as CN113938604B)
Application number: CN202111107829.8A
Authority: CN (China)
Prior art keywords: data, channel, resolution, phase, fusion
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventors: 李佳君 (Li Jiajun), 杨林 (Yang Lin), 朱万清 (Zhu Wanqing)
Assignee (current and original): Shenzhen Goodix Technology Co Ltd
Application filed by Shenzhen Goodix Technology Co Ltd; priority to CN202111107829.8A; publication of CN113938604A; application granted; publication of CN113938604B.

Classifications

    • H04N 23/67: Focus control based on electronic image sensor signals
    • H04N 23/672: Focus control based on electronic image sensor signals, based on the phase difference signals
    • H04N 25/60: Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N 25/704: Pixels specially adapted for focusing, e.g. phase difference pixel sets

Abstract

The embodiments of the present application disclose a focusing method, a focusing apparatus, an electronic device, and a storage medium, belonging to the technical field of images. Specifically, the method comprises the following steps: acquiring left phase source data and right phase source data; obtaining a low-resolution disparity map from the left phase source data and the right phase source data; acquiring a high-resolution guide map; obtaining a high-resolution disparity map from the low-resolution disparity map and the high-resolution guide map, wherein the resolution of the high-resolution disparity map is higher than that of the low-resolution disparity map; calculating high-resolution left phase detection data and high-resolution right phase detection data from the left phase source data, the right phase source data, and the high-resolution disparity map; and performing focusing based on the high-resolution left phase detection data and the high-resolution right phase detection data. The focusing method, focusing apparatus, electronic device, and storage medium provided by the embodiments can improve the focusing effect.

Description

Focusing method, focusing apparatus, electronic device and storage medium
Technical Field
The present disclosure relates to the field of image technologies, and in particular, to a focusing method, a focusing apparatus, an electronic device, and a storage medium.
Background
Image sensors with color filter arrays are widely used in mobile communication devices, security devices, and vehicle-mounted and home interactive devices. To adapt to diverse imaging scenes and improve imaging quality, different image sensors adopt different color filter arrays. To image the subject of a scene clearly, color digital cameras typically provide an autofocus function. Phase Detection Auto Focus (PDAF) is increasingly preferred over Contrast Detection Auto Focus (CDAF) because of its faster focusing speed.
Phase detection pixels are built on the same image sensor as ordinary imaging pixels. Since phase detection pixels cannot directly participate in imaging, the pixel positions they occupy must be compensated and corrected; consequently, the position and density of the phase detection pixels on the sensor's filter array affect the final imaging quality.
If the density of phase detection pixels is high, imaging quality degrades; if it is low, focusing accuracy and speed degrade.
Existing image sensors therefore suffer either degraded imaging quality from an excessively high density of phase detection pixels, or poor focusing precision and sensitivity from a density that is too low.
Disclosure of Invention
The embodiments of the present application provide a focusing method, a focusing apparatus, and an electronic device, aiming to reduce the influence of phase detection pixels on imaging quality while ensuring focusing precision and speed, so as to better restore the real imaging scene. The scheme of the invention is applicable to both Charge-Coupled Device (CCD) and Complementary Metal-Oxide-Semiconductor (CMOS) sensor structures.
In a first aspect, an embodiment of the present application provides a focusing method, where the method includes:
acquiring left phase source data and right phase source data; obtaining a low-resolution disparity map from the left phase source data and the right phase source data; acquiring a high-resolution guide map; obtaining a high-resolution disparity map from the low-resolution disparity map and the high-resolution guide map, wherein the resolution of the high-resolution disparity map is higher than that of the low-resolution disparity map; calculating high-resolution left phase detection data and high-resolution right phase detection data from the left phase source data, the right phase source data, and the high-resolution disparity map; and achieving focusing based on the high-resolution left phase detection data and the high-resolution right phase detection data.
In one possible embodiment, the image sensor includes a pixel array divided into two or more nodes, each node containing E × F pixels. The nodes comprise shadow nodes and blank nodes: a shadow node contains a plurality of imaging pixels and X phase detection pixels, while a blank node contains only imaging pixels. X, E, and F are all natural numbers greater than or equal to 1.
In one possible embodiment, obtaining left phase source data and right phase source data includes: obtaining the plurality of shadow nodes; obtaining a shadow node array according to the shadow nodes; obtaining a phase detection pixel array according to the shadow node array and the phase detection pixel; and obtaining the left phase source data and the right phase source data according to the phase detection pixel array.
In one possible embodiment, fusion calculation is performed on the left phase source data to obtain left phase fusion data, and on the right phase source data to obtain right phase fusion data; phase parallax calculation is then performed on the left phase fusion data and the right phase fusion data to obtain the low-resolution disparity map.
In a possible embodiment, the performing the phase disparity calculation on the left phase fusion data and the right phase fusion data to obtain the low resolution disparity map includes: determining a parallax search range coefficient S; setting a filling coefficient to V based on the parallax search range coefficient S; performing edge filling calculation on the left phase fusion data according to the filling coefficient V to obtain left phase edge filling data, and performing edge filling calculation on the right phase fusion data according to the filling coefficient V to obtain right phase edge filling data; traversing the left phase edge filling data and the right phase edge filling data, and calculating phase disparity values corresponding to the left phase edge filling data and the right phase edge filling data to obtain the low-resolution disparity map.
In one possible embodiment, before traversing the left phase edge padding data and the right phase edge padding data and calculating the phase disparity values corresponding to them to obtain the low-resolution disparity map, the method further includes: selecting any datum in the right phase edge padding data as a point to be calculated; selecting a comparison point in the left phase edge padding data according to the point to be calculated; extracting a right phase data vector from the right phase edge padding data based on the point to be calculated and the filling coefficient V; extracting a left phase data vector based on the comparison point, the parallax search range coefficient S, and the filling coefficient V; performing, based on the parallax search range coefficient S and with the right phase data vector as a template, a sliding window operation against the left phase data vector to obtain two or more SAD (Sum of Absolute Differences) values; and finding the minimum of the two or more SAD values, the position of the minimum giving the phase disparity value corresponding to the point to be calculated and the comparison point.
In one possible embodiment, acquiring the high-resolution guide map comprises: acquiring first channel pixels of the image sensor, and obtaining a first channel fusion data array from the first channel pixels, wherein the first channel is the channel with the largest share of the image sensor's pixels other than the W-channel pixels; and obtaining the high-resolution guide map from the first channel fusion data array.
In one possible embodiment, obtaining the first channel pixels of the image sensor and obtaining the first channel fusion data array from them comprises: stacking the first channel pixels within each node of the image sensor to form a first channel pixel matrix; obtaining a first channel pixel array from the first channel pixel matrices; and performing fusion calculation on every two longitudinally adjacent pixels in the first channel pixel array to obtain the first channel fusion data array.
In one possible embodiment, obtaining the high-resolution guide map from the first channel fusion data array includes: dividing the first channel fusion data array into a plurality of first intervals; extracting a plurality of first channel data from the first intervals; and obtaining the high-resolution guide map from the plurality of first channel data, wherein the respective numbers are natural numbers greater than or equal to 1.
In one possible embodiment, extracting the first channel data from the first intervals means extracting a plurality of first channel data according to the positions of the phase detection pixels in the shadow nodes, the number extracted being a natural number greater than or equal to 1.
In one possible embodiment, obtaining the high-resolution disparity map from the low-resolution disparity map and the high-resolution guide map comprises: performing joint bilateral upsampling calculation on the low-resolution disparity map and the high-resolution guide map to obtain the high-resolution disparity map, wherein the resolution of the high-resolution disparity map is consistent with that of the high-resolution guide map.
In one possible embodiment, acquiring high resolution left phase detection data and high resolution right phase detection data from the left phase source data, the right phase source data and the high resolution disparity map comprises: acquiring first channel left fusion data and first channel right fusion data according to the first channel fusion data array; acquiring the high-resolution left phase detection data according to the left phase fusion data and the first channel left fusion data; and acquiring the high-resolution right phase detection data according to the right phase fusion data, the first channel right fusion data and the high-resolution disparity map.
Acquiring the first channel left fusion data and the first channel right fusion data from the first channel fusion data array comprises: acquiring first channel left split data and first channel right split data from the first channel fusion data array; and obtaining the first channel left fusion data from the first channel left split data and the first channel right fusion data from the first channel right split data.
Obtaining the first channel left fusion data from the first channel left split data and the first channel right fusion data from the first channel right split data includes: dividing the first channel left split data into a plurality of second intervals, and dividing the first channel right split data into a plurality of third intervals; selecting a plurality of first channel left split data from the second intervals, and selecting a plurality of first channel right split data from the third intervals; and obtaining the first channel left fusion data from the plurality of first channel left split data, and the first channel right fusion data from the plurality of first channel right split data, wherein the respective numbers are natural numbers greater than or equal to 1.
In one possible embodiment, selecting the first channel left split data from the second intervals and the first channel right split data from the third intervals means extracting the plurality of first channel left split data from the second intervals according to the positions of the phase detection pixels in the shadow nodes, and likewise selecting the plurality of first channel right split data according to those positions, the numbers selected being natural numbers greater than or equal to 1.
In one possible embodiment, obtaining the high-resolution left phase detection data from the left phase fusion data and the first channel left fusion data means filling the left phase fusion data with the first channel left fusion data to obtain the high-resolution left phase detection data.
In one possible embodiment, the acquiring the high resolution right phase detection data from the right phase fusion data, the first channel right fusion data and the high resolution disparity map comprises: determining a phase pixel conversion coefficient; acquiring a parallax value of the high-resolution parallax map; and acquiring high-resolution right phase detection data based on the disparity value, the phase pixel conversion coefficient, the right phase fusion data and the first channel right fusion data.
In a second aspect, a focusing apparatus includes:
the acquisition module is used for acquiring left phase source data and right phase source data, acquiring a low-resolution disparity map according to the left phase source data and the right phase source data, acquiring a high-resolution guide map, and acquiring a high-resolution disparity map according to the low-resolution disparity map and the high-resolution guide map, wherein the resolution of the high-resolution disparity map is higher than that of the low-resolution disparity map;
the calculation module is used for calculating high-resolution left phase detection data and high-resolution right phase detection data according to the left phase source data, the right phase source data and the high-resolution disparity map; and
and the focusing module is used for realizing focusing based on the high-resolution left phase detection data and the high-resolution right phase detection data.
In a possible embodiment, after acquiring the left phase source data and the right phase source data, the acquiring module is further specifically configured to:
acquiring left phase fusion data according to the left phase source data;
and acquiring right phase fusion data according to the right phase source data.
In one possible embodiment, in obtaining the high resolution guidance map, the obtaining module is specifically configured to:
acquiring a first channel pixel on an image sensor, and acquiring a first channel fusion data array according to the first channel pixel, wherein the first channel pixel is a channel pixel with the largest number ratio in pixels of the image sensor except a W channel pixel;
and acquiring the high-resolution guide map according to the first channel fusion data array.
In a possible embodiment, after acquiring the first channel fused data array, the acquisition module is further configured to: and acquiring first channel left fusion data and first channel right fusion data based on the first channel fusion data array.
In a possible embodiment, in the aspect of calculating the high resolution left phase detection data and the high resolution right phase detection data according to the left phase source data, the right phase source data and the high resolution disparity map, the calculating module is specifically configured to:
calculating to obtain high-resolution left phase detection data according to the left phase fusion data and the first channel left fusion data;
and calculating high-resolution right phase detection data according to the right phase fusion data, the first channel right fusion data and the high-resolution disparity map.
In a third aspect, an electronic device comprises a processor, a memory and a program stored on the memory and executable on the processor, the program, when executed by the processor, implementing the steps of the method as in the first aspect.
In a fourth aspect, a storage medium, on which a program is stored, which program, when executed by a processor, carries out the steps of the method as in the first aspect.
In the embodiments of the present application, the shadow nodes on the image sensor are obtained, the phase detection pixel array is obtained from them, the left phase source data and the right phase source data are obtained from the phase detection pixel array, and the low-resolution disparity map is obtained based on the left phase source data and the right phase source data. The first channel pixels of the image sensor are obtained, a first channel fusion data array is obtained from the first channel pixels, and the high-resolution guide map is obtained based on the first channel fusion data array. The first channel fusion data array is further processed to obtain first channel left fusion data and first channel right fusion data. The low-resolution disparity map and the high-resolution guide map are combined to obtain the high-resolution disparity map, and the high-resolution left phase detection data and high-resolution right phase detection data are obtained based on the high-resolution disparity map, the first channel left fusion data, the first channel right fusion data, the left phase source data, and the right phase source data. Focusing is achieved based on the high-resolution left phase detection data and the high-resolution right phase detection data. Thus, by introducing the high-resolution disparity map and performing interpolation reconstruction with the first channel left fusion data and the first channel right fusion data, high-resolution phase detection data are restored on an image sensor with a low density of phase detection pixels. Calculating the phase difference from the reconstructed high-resolution phase detection data overcomes the low accuracy of calculations on the original low-resolution phase detection data: the effective density of the phase detection pixels is raised without additional hardware cost, imaging quality is preserved, and the imaging efficiency and accuracy of the image sensor are improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a focusing method according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a process for acquiring left phase source data and right phase source data according to an embodiment of the present application;
fig. 3 is a schematic diagram of a pixel distribution of an image sensor according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram illustrating a pixel distribution of a shadow node of an image sensor according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a shadow node array and a phase detection pixel array according to an embodiment of the present application;
FIG. 6 is a schematic diagram of left phase source data and right phase source data provided by an embodiment of the present application;
fig. 7 is a schematic flowchart of generating a low-resolution disparity map according to an embodiment of the present disclosure;
fig. 8 is a schematic diagram of phase fusion calculation according to an embodiment of the present application;
fig. 9 is a schematic diagram of left phase fusion data and right phase fusion data provided in an embodiment of the present application;
fig. 10 is a schematic flowchart of a phase disparity calculation according to an embodiment of the present disclosure;
fig. 11 is a schematic diagram of acquiring left phase edge padding data and right phase edge padding data according to an embodiment of the present application;
fig. 12 is a schematic diagram of extracting a right phase data vector according to an embodiment of the present application;
FIG. 13 is a schematic diagram of a method for extracting left phase data vectors according to an embodiment of the present application;
fig. 14 is a schematic diagram of calculating a SAD value according to an embodiment of the present application;
fig. 15 is a schematic diagram of a low-resolution disparity map according to an embodiment of the present disclosure;
fig. 16 is a schematic flow chart of generating a high-resolution guidance diagram according to an embodiment of the present application;
fig. 17 is a schematic diagram of a first channel pixel matrix illustrating a first channel pixel distribution according to an embodiment of the present disclosure;
FIG. 18 is a schematic diagram of a first channel pixel array and a first channel fused data array according to an embodiment of the present disclosure;
FIG. 19 is a schematic diagram of a high resolution guide map provided by an embodiment of the present application;
fig. 20 is a schematic diagram of generating a high resolution disparity map according to an embodiment of the present application;
fig. 21 is a schematic flowchart of a process for acquiring high-resolution left-phase detection data and high-resolution right-phase detection data according to an embodiment of the present disclosure;
fig. 22 is a schematic diagram of first channel left split data and first channel right split data according to an embodiment of the present application;
fig. 23 is a schematic diagram of an operation of extracting first channel left split data and first channel right split data according to an embodiment of the present application;
FIG. 24 is a schematic diagram illustrating a method for obtaining high resolution left phase detection data according to an embodiment of the present application;
FIG. 25 is a schematic diagram illustrating a method for obtaining high resolution right phase detection data according to an embodiment of the present application;
FIG. 26 is a focusing device according to an embodiment of the present disclosure;
fig. 27 is an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present application.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
In this application, the directional terms "upper", "lower", "front", "rear", and the like are defined with respect to the schematically-disposed orientation of the components in the drawings, and it is to be understood that these directional terms are relative concepts that are used for descriptive and clarity purposes and that will vary accordingly depending on the orientation in which the components are disposed in the drawings.
In addition, unless a specified order is explicitly stated in the context of the present application, the process steps described herein may be performed in a different order than specified, i.e., each step may be performed in the specified order, substantially simultaneously, in a reverse order, or in a different order.
Technical terms mentioned in the embodiments of the present application are explained below:
Phase detection pixel: a pixel used to calculate and output phase data; autofocus can be performed based on the phase data output by the phase detection pixels. A phase detection pixel cannot directly participate in imaging, and the pixel value at the position it occupies must be calculated from the values of adjacent imaging pixels.
OCL: On-Chip Lens, a microlens on the photosensitive element used to focus external light onto the pixels so as to increase light sensitivity. 2 × 2 OCL means that four adjacent pixels share one microlens.
The embodiment of the application provides a focusing method in a first aspect.
Fig. 1 is a schematic flowchart of a focusing method according to an embodiment of the present disclosure. As shown, the method may include steps S101 to S106 described below.
S101: left and right phase source data are acquired.
In this embodiment, the shadow nodes of the image sensor contain phase detection pixels, and the phase detection pixels use 2 × 2 OCL; the left phase source data and the right phase source data can therefore be obtained by extracting the phase detection pixels in all the shadow nodes.
S102: and obtaining a low-resolution disparity map according to the left phase source data and the right phase source data.
In this embodiment, the low-resolution disparity map can be obtained by performing fusion calculation and phase disparity calculation on the left phase source data and the right phase source data, and the embodiment of the present application is not limited to a specific algorithm.
S103: a high resolution guide map is acquired.
In this embodiment, all the first channel pixels are extracted, and a high-resolution guide map is generated based on the first channel pixels; the resolution of the high-resolution guide map is greater than that of the low-resolution disparity map.
S104: and acquiring a high-resolution disparity map according to the low-resolution disparity map and the high-resolution guide map, wherein the resolution of the high-resolution disparity map is higher than that of the low-resolution disparity map.
In this embodiment, the low-resolution disparity map and the high-resolution guide map are calculated to obtain the high-resolution disparity map.
S105: and calculating high-resolution left phase detection data and high-resolution right phase detection data according to the left phase source data, the right phase source data and the high-resolution disparity map.
S106: focusing is achieved based on the high-resolution left phase detection data and the high-resolution right phase detection data.
Fig. 2 is a schematic flowchart of a process for acquiring left phase source data and right phase source data according to an embodiment of the present disclosure. As shown, the method may include steps S1011 to S1014 described below.
S1011: acquiring a plurality of shadow nodes;
in this embodiment, the image sensor is divided into a plurality of nodes according to the phase detection pixels distributed on the image sensor, wherein the nodes including the phase detection pixels are shadow nodes.
S1012: obtaining a shadow node array according to a plurality of shadow nodes;
in this embodiment, all shadow nodes are extracted, resulting in an array of shadow nodes.
S1013: obtaining a phase detection pixel array according to the shadow node array and the phase detection pixels;
in this embodiment, all the phase detection pixels in the shadow node array are extracted, resulting in a phase detection pixel array.
S1014: and obtaining left phase source data and right phase source data according to the phase detection pixel array.
The method for acquiring the left phase source data and the right phase source data is specifically described below with reference to fig. 3 to 6.
Fig. 3 is a schematic diagram of the pixel distribution of an image sensor according to an embodiment of the present disclosure. As shown, the pixel array of the image sensor 30 contains A × B pixels, among which the phase detection pixels 41 are distributed. The shadow nodes 31 are defined according to the positions of the phase detection pixels 41; each shadow node 31 is a pixel array containing E × F pixels. In the image sensor 30 of this embodiment, the blank nodes 32 consist entirely of imaging pixels and contain the same number of pixels, E × F, as the shadow nodes 31. X phase detection pixels 41 are distributed in each shadow node, and the remaining pixels are imaging pixels. A, B, E, and F are all natural numbers greater than or equal to 1, and the value of A, B, E, F is not specifically limited in the embodiments of the present application.
In this embodiment, the number of nodes in the lateral direction of the image sensor 30 is A/E, and the number of nodes in the longitudinal direction is B/F.
In this embodiment, the number of shadow nodes 31 in the lateral direction of the image sensor 30 is 1/M of the total number of nodes in the lateral direction, and the number of shadow nodes 31 in the longitudinal direction is 1/N of the total number of nodes in the longitudinal direction. M and N are natural numbers greater than or equal to 1, and the value of M, N is not specifically limited in the embodiments of the present application.
Fig. 4 is a schematic pixel distribution diagram of a shadow node of an image sensor according to an embodiment of the present disclosure. As shown, the shadow node 31 contains phase detection pixels 41, which sit below microlenses; in this embodiment, the microlenses are 2 × 2 OCL, i.e., there are 4 phase detection pixels 41 below one microlens. The 2 × 2 OCL can realize phase detection in multiple directions, such as horizontal and vertical, simultaneously, improving phase focusing performance. It should be understood, however, that the present application does not limit the structure of the microlens; a microlens such as a 2 × 1 OCL can also achieve the object of the present invention.
In this embodiment, X phase detection pixels 41 are distributed in each shadow node 31, and the ratio of the number of phase detection pixels 41 to the total number of pixels of the image sensor 30 is X/(E·F·M·N). X is a natural number greater than or equal to 1, and the value of X is not particularly limited in the embodiments of the present application.
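For illustration only, the following sketch evaluates the node and ratio bookkeeping above for an example geometry; all numeric values are illustrative assumptions, not taken from the patent.

```python
# Toy geometry check for the node bookkeeping above (example values only).
A, B = 4000, 3000   # sensor pixels, lateral x longitudinal
E, F = 8, 8         # node size in pixels
M, N = 2, 3         # shadow nodes are 1/M (lateral) and 1/N (longitudinal) of all nodes
X = 4               # phase detection pixels per shadow node (one 2 x 2 OCL)

nodes_lat, nodes_lon = A // E, B // F                 # 500 x 375 nodes
shadow_nodes = (nodes_lat // M) * (nodes_lon // N)    # 250 * 125 = 31250 shadow nodes
pd_ratio = X * shadow_nodes / (A * B)                 # = X/(E*F*M*N), about 1.04%
print(nodes_lat, nodes_lon, shadow_nodes, pd_ratio)
```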
Fig. 5 is a schematic diagram of the shadow node array and the phase detection pixel array according to an embodiment of the present application. Describing fig. 5 in conjunction with fig. 3: extracting all of the shadow nodes 31 shown in fig. 3 yields the shadow node array 51 shown in fig. 5. As noted above, the image sensor 30 has A/E nodes in the lateral direction and B/F nodes in the longitudinal direction, of which the shadow nodes 31 account for 1/M and 1/N, respectively. The total number of nodes in the shadow node array 51 is therefore (A·B)/(E·F·M·N).
Fig. 6 is a schematic diagram of the left phase source data and the right phase source data provided in an embodiment of the present application. In this embodiment, X phase detection pixels 41 are distributed in each shadow node 31, and extracting the phase detection pixels 41 from the shadow nodes 31 forms the phase detection pixel array 52, which contains (X·A·B)/(E·F·M·N) pixels. Repeating this operation to extract the data values of the phase detection pixels 41 in all shadow nodes 31 finally yields the left phase source data 61 and the right phase source data 62 shown in fig. 6. Since each 2 × 2 OCL contributes its left-half pixels to the left phase data and its right-half pixels to the right phase data, the total amount of data contained in each of the left phase source data 61 and the right phase source data 62 is (X·A·B)/(2·E·F·M·N).
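As an illustration of this extraction, the sketch below collects left- and right-half samples from 2 × 2 OCL blocks. It assumes the top-left corner of each phase-pixel block is known, and that the left column of a microlens carries the left phase data; both are labeling assumptions made here for illustration, not specifics from the patent.

```python
import numpy as np

def split_phase_source(raw: np.ndarray, pd_rows: np.ndarray, pd_cols: np.ndarray):
    """Collect left/right phase source data from 2x2-OCL phase detection pixels.

    pd_rows/pd_cols give the top-left corner of each 2x2 microlens block;
    under one microlens, the left column is treated as left-phase data and
    the right column as right-phase data.
    """
    left, right = [], []
    for r, c in zip(pd_rows, pd_cols):
        left.append(raw[r:r + 2, c])        # two left-half pixels
        right.append(raw[r:r + 2, c + 1])   # two right-half pixels
    # stack into (num_microlenses, 2) arrays of phase source data
    return np.stack(left), np.stack(right)

# toy usage: one 2x2 OCL at (0, 0) in a 4x4 sensor patch
raw = np.arange(16, dtype=float).reshape(4, 4)
L, R = split_phase_source(raw, np.array([0]), np.array([0]))
```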
Fig. 7 is a schematic flowchart illustrating a process of generating a low-resolution disparity map according to an embodiment of the present disclosure. As shown, the method may include steps S1021 through S1022 described below.
S1021: performing fusion calculation on the left phase source data to obtain left phase fusion data, and performing fusion calculation on the right phase source data to obtain right phase fusion data;
S1022: performing phase parallax calculation on the left phase fusion data and the right phase fusion data to obtain a low-resolution disparity map.
The method for obtaining the low-resolution disparity map is specifically described below with reference to fig. 8 to 15.
Fig. 8 is a schematic diagram of the phase fusion calculation. Fig. 9 is a schematic diagram of the left phase fusion data and the right phase fusion data. As shown, the left phase source data 61 and the right phase source data 62 are each subjected to the longitudinal 2-in-1 fusion calculation of fig. 8: every two longitudinally adjacent data of the left phase source data 61 are fused, and likewise for the right phase source data 62, yielding the left phase fusion data 91 and the right phase fusion data 92, each with a data volume of (X·A·B)/(4·E·F·M·N). The fusion calculation can reduce noise, improve the signal-to-noise ratio, eliminate photosensitivity differences, and reduce the data volume, thereby lightening the computational burden, speeding up the response, and improving focusing sensitivity.
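A minimal sketch of the vertical 2-in-1 fusion follows; averaging is used as the fusion operator here for illustration, since the patent does not fix a specific operator.

```python
import numpy as np

def fuse_vertical_2in1(phase: np.ndarray) -> np.ndarray:
    """Average every two vertically adjacent samples (2-in-1 binning).

    Halves the row count, which suppresses noise and reduces the data
    volume before the disparity search. Assumes an even number of rows.
    """
    h, w = phase.shape
    return phase.reshape(h // 2, 2, w).mean(axis=1)

left_fused = fuse_vertical_2in1(np.random.rand(8, 6))   # shape (4, 6)
```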
Fig. 10 is a schematic flowchart of phase disparity calculation according to an embodiment of the present application. As shown, the method may include steps S10221 to S10230 described below.
S10221: the parallax search range coefficient S is determined, and the size of the parallax search range coefficient S is related to the setting of the image sensor 30, in the embodiment of the present application, the size of the parallax search range coefficient S is taken as 3 as an example, and it should be understood that the value of the parallax search range coefficient S is not limited to 3.
S10222: setting a filling coefficient V based on the parallax search range coefficient S. The setting of V should be based on S because the magnitude of V determines the length of the right phase data vector 121 (the template), while the magnitude of S affects the subsequent sliding window operation and hence the left phase data vector 131. The filling coefficient V should not be chosen too large: an excessively large V increases the amount of calculation, reduces calculation efficiency, and thereby reduces focusing sensitivity. Nor should it be chosen too small: an excessively small V shrinks the template, reducing calculation precision and hence focusing precision. It should be understood that the size of V is not specifically limited in the embodiments of the present application.
S10223: performing edge filling calculation on the left phase fusion data according to the filling coefficient V to obtain left phase edge padding data, and performing edge filling calculation on the right phase fusion data according to the filling coefficient V to obtain right phase edge padding data;
S10224: selecting any datum in the right phase edge padding data as a point to be calculated;
S10225: selecting a comparison point in the left phase edge padding data according to the point to be calculated;
S10226: extracting a right phase data vector from the right phase edge padding data based on the point to be calculated and the filling coefficient V;
S10227: extracting a left phase data vector based on the comparison point, the parallax search range coefficient S, and the filling coefficient V;
S10228: based on the parallax search range coefficient S and with the right phase data vector as a template, performing a sliding window operation against the left phase data vector and calculating two or more SAD values;
S10229: finding the minimum of the two or more SAD values, the position of which gives the phase disparity value corresponding to the point to be calculated and the comparison point;
S10230: traversing the left phase edge padding data and the right phase edge padding data, and calculating the phase disparity values corresponding to them to obtain the low-resolution disparity map.
Fig. 11 is a schematic diagram of acquiring the left phase edge padding data and the right phase edge padding data according to an embodiment of the present application. As shown, the filling coefficient is set to V; in this embodiment, taking V = 5 as an example, edge filling calculation is performed on the left phase fusion data 91 and the right phase fusion data 92.
The edge filling calculation can be performed in various ways, such as constant edge filling (Border Constant), replicated edge filling (Border Replicate), and mirrored edge filling (Border Reflect). The embodiments of the present application take mirrored edge filling (Border Reflect) as an example.
As shown, the leftmost column of the left phase fusion data 91 is taken as the axis of symmetry, and the V columns of data to its right are mirrored outward; the rightmost column of the left phase fusion data 91 is treated in the same way, yielding the left phase edge padding data 111.
Similarly, the right phase fusion data 92 is processed in the same manner as the left phase fusion data 91, yielding the right phase edge padding data 112. The purpose of the edge filling calculation is to allow the phase disparity values at the left and right edges of the left phase fusion data 91 and the right phase fusion data 92 to be calculated.
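The mirror padding described above maps directly onto NumPy's reflective padding mode; a minimal sketch, assuming V = 5 and lateral (column-wise) padding only:

```python
import numpy as np

V = 5  # filling coefficient, as in the example above

def pad_edges_mirror(fused: np.ndarray, v: int) -> np.ndarray:
    """Mirror-pad v columns on the left and right (Border Reflect).

    np.pad with mode='reflect' mirrors about the outermost column,
    matching the symmetry-axis description in the text.
    """
    return np.pad(fused, ((0, 0), (v, v)), mode="reflect")

row = np.array([[1.0, 2.0, 3.0, 4.0, 5.0, 6.0]])
padded = pad_edges_mirror(row, V)   # width 6 + 2*5 = 16
```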
Fig. 12 is a schematic diagram of extracting a right phase data vector according to an embodiment of the present application. As shown, any datum in the right phase edge padding data 112 may be selected as the point to be calculated; in this embodiment, the point (0, 0) is used as the point to be calculated, and the length of the extracted right phase data vector 121 is 2 × V + 1 = 11.
Fig. 13 is a schematic diagram of extracting a left phase data vector according to an embodiment of the present application. As shown, a datum in the left phase edge padding data 111 is selected as the comparison point; in this embodiment, with the (0, 0) point as the comparison point, the length of the extracted left phase data vector 131 is 2 × V + 1 − 2 × S = 5.
Fig. 14 is a schematic diagram of calculating SAD values according to an embodiment of the present application. As shown, with the right phase data vector 121 as the template (i.e., as the reference), a sliding window operation is performed against the left phase data vector 131; the window length equals the length of the left phase data vector 131, namely 5, and the SAD values over the interval −3 to +3 are calculated, i.e., the L1 (Manhattan) distances between windows of the right phase data vector 121 and the left phase data vector 131. In the embodiment of the present application, since the parallax search range coefficient S is 3, 7 SAD values can be calculated.
The 7 SAD values thus obtained are fitted to a quadratic curve, and the extremum of the curve is found; the position of the minimum is the phase disparity value corresponding to the (0, 0) point. The parallax search range coefficient S is typically chosen such that, within the interval [−S, +S], the quadratic curve fitted to the SAD values has one and only one extremum.
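The following sketch reproduces this SAD search with the example sizes above (V = 5, S = 3): the short left phase data vector is compared against each window of the right phase data vector (the template), and the integer minimum is refined with a three-point quadratic fit. The sub-pixel refinement formula is a standard parabola-vertex step assumed here for illustration.

```python
import numpy as np

def sad_disparity(right_vec: np.ndarray, left_vec: np.ndarray, s: int) -> float:
    """Slide the short left vector across the long right vector (the template),
    compute 2*s+1 SAD (L1) values, then refine the minimum with a parabola fit."""
    w = len(left_vec)                       # window length, 2*V+1-2*S
    sad = np.array([np.abs(right_vec[k:k + w] - left_vec).sum()
                    for k in range(2 * s + 1)])
    k = int(np.argmin(sad))                 # integer offset of the SAD minimum
    if 0 < k < 2 * s:                       # sub-pixel refinement via quadratic fit
        denom = sad[k - 1] - 2 * sad[k] + sad[k + 1]
        k = k + 0.5 * (sad[k - 1] - sad[k + 1]) / denom if denom != 0 else k
    return k - s                            # disparity in [-s, +s]

right_vec = np.array([3., 2., 1., 0., 1., 2., 3., 4., 5., 6., 7.])  # length 2*V+1 = 11
left_vec = np.array([1., 0., 1., 2., 3.])                            # length 5
d = sad_disparity(right_vec, left_vec, s=3)                          # -1.0 here
```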
The above calculation is repeated while traversing the left phase edge padding data 111 and the right phase edge padding data 112, resulting in the low-resolution disparity map 151 shown in fig. 15. The low-resolution disparity map 151 contains the same amount of data as the left and right phase fusion data, namely (X·A·B)/(4·E·F·M·N).
The embodiment of the present application does not limit the specific algorithm for calculating the phase disparity, and any algorithm capable of achieving the phase disparity calculation effect falls within the scope of the present application.
Fig. 16 is a schematic flowchart illustrating a process of generating a high-resolution guidance diagram according to an embodiment of the present application. As shown, the method may include steps S1031 to S1036 described below.
S1031: stacking the first channel pixels in all nodes of the image sensor 30 to form first channel pixel matrices;
S1032: obtaining a first channel pixel array from the first channel pixel matrices;
S1033: performing fusion calculation on every two longitudinally adjacent pixels in the first channel pixel array to obtain a first channel fusion data array;
S1034: dividing the first channel fusion data array into a plurality of first intervals;
S1035: extracting a plurality of first channel data from the first intervals;
S1036: obtaining the high-resolution guide map from the plurality of first channel data.
The method of obtaining the high-resolution guide map is specifically described below with reference to fig. 17 to 19.
Fig. 17 is a schematic diagram of a first channel pixel matrix illustrating the first channel pixel distribution according to an embodiment of the present disclosure. As shown, the first channel pixels 171 are distributed in the nodes (including the shadow nodes 31 and the blank nodes 32), and the first channel pixels 171 are stacked to the left to form a first channel pixel matrix 172.
In this embodiment, the G channel is chosen because G-channel pixels account for the largest share of the pixels of the image sensor 30 (larger than the shares of R-channel and B-channel pixels). Although the share of W-channel pixels is also large, they easily cause overexposure, so a G-channel pixel is selected as the first channel pixel. The proportion of G-channel pixels in each row of a node is Y/E, i.e., each row contains Y G-channel pixels; stacking them to the left forms a first channel pixel matrix 172 of Y × F pixels. Y is a natural number greater than or equal to 1 and smaller than E, and the value of Y is not particularly limited in the embodiments of the present application.
According to the method of this embodiment, the first channel pixels 171 in all the nodes (i.e., the shadow nodes 31 and the blank nodes 32) in the image sensor 30 are extracted.
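A sketch of the per-node packing step follows, assuming the G-channel positions in a node are given by a boolean mask with the same count Y in every row; the checkerboard mask used in the toy usage is an illustrative assumption, not a layout from the patent.

```python
import numpy as np

def stack_channel_left(node: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Within one E x F node, keep the pixels where mask is True (the G
    channel here) and pack them to the left, row by row.

    Assumes every row of the node contains the same number Y of G pixels,
    so the result keeps Y columns per row (shape F x Y).
    """
    return np.stack([row[m] for row, m in zip(node, mask)])

# toy 4x4 node whose G pixels sit on a checkerboard (Y = 2 per row)
node = np.arange(16, dtype=float).reshape(4, 4)
mask = (np.indices((4, 4)).sum(axis=0) % 2 == 0)
g_matrix = stack_channel_left(node, mask)   # shape (4, 2)
```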
Fig. 18 is a schematic diagram of the first channel pixel array and the first channel fusion data array according to an embodiment of the present application. As shown, the first channel pixel array 181 contains (Y·A·B)/E pixels. Performing fusion calculation on every two longitudinally adjacent pixels in the first channel pixel array 181 yields the first channel fusion data array 182, which contains (Y·A·B)/(2·E) data values. The fusion calculation can reduce noise, improve the signal-to-noise ratio, eliminate photosensitivity differences, and reduce the data volume, thereby lightening the computational burden, speeding up the response, and improving focusing sensitivity. It should be understood that the present application does not limit the specific method of fusion calculation; any method capable of achieving the above effects falls within the scope of the present application.
Fig. 19 is a schematic diagram of the high-resolution guide map according to an embodiment of the present application. As shown, the first channel fusion data array 182 is divided into first intervals 191, each containing an equal amount of data. From each first interval 191, several first channel data 192 (the shaded portion in FIG. 19) are selected, and the selected first channel data 192 are rearranged to finally obtain the high-resolution guide map 193.
In this embodiment, the positions of the selected first channel data 192 follow the distribution of the phase detection pixels 41 in the shadow nodes 31 (referring to fig. 4, the phase detection pixels 41 are arranged in a direction parallel to the diagonal of the shadow node 31), which improves the phase detection effect and hence focusing sensitivity and precision. It should be understood that this is a preferred embodiment; selecting the first channel data 192 at other positions can still achieve the object of the present invention. The present invention does not limit the positions at which the first channel data 192 are selected.
Fig. 20 is a schematic diagram of generating the high-resolution disparity map according to an embodiment of the present application. As shown, a spatial-domain Gaussian convolution kernel 201 is taken on the low-resolution disparity map 151 and a range-domain Gaussian convolution kernel 202 on the high-resolution guide map 193; in this embodiment, the upsampling kernel size is 1 × 2. The convolution weights are obtained as the kernel windows slide over the low-resolution disparity map 151 and the high-resolution guide map 193, and an algorithm based on joint bilateral upsampling produces the high-resolution disparity map 203, whose resolution is consistent with that of the high-resolution guide map 193. It should be understood that any algorithm achieving the effect of this embodiment, namely that the resolution of the resulting high-resolution disparity map 203 is improved relative to the low-resolution disparity map 151, such as deep learning or machine learning methods, falls within the scope of the present invention.
In one possible embodiment, taking M = 2 and N = 3 as an example, the shadow nodes 31 account for 1/2 of the nodes of the image sensor 30 in one direction and 1/3 in the other. The high-resolution guide map 193 then contains the same amount of data as the low-resolution disparity map in the longitudinal direction, and 3 times as much data in the lateral direction; that is, the resolution of the high-resolution guide map 193 is higher than that of the low-resolution disparity map 151. Based on the algorithm of this embodiment, the resolution of the resulting high-resolution disparity map 203 is accordingly improved 3-fold relative to the low-resolution disparity map 151. This embodiment does not limit the specific values of M and N; it should be understood that the values of M and N depend on the distribution of the phase detection pixels 41 in the image sensor 30. The embodiments of the present application do not limit the specific algorithm: any algorithm having the same technical effect as joint bilateral upsampling falls within the protection scope of the present application. It should be understood that, in this embodiment, resolution refers to the amount of data contained in the high-resolution guide map 193 and the high-resolution disparity map 203.
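For reference, a compact (and deliberately unoptimized) joint bilateral upsampling sketch is given below. The Gaussian sigmas, the 3 × 3 low-resolution neighbourhood, and the nearest-neighbour mapping between resolutions are illustrative assumptions, not the patent's 1 × 2 kernel configuration; the toy shapes mirror the lateral-only 3x upsampling discussed above.

```python
import numpy as np

def joint_bilateral_upsample(disp_lo: np.ndarray, guide_hi: np.ndarray,
                             sigma_s: float = 1.0, sigma_r: float = 10.0) -> np.ndarray:
    """For each high-resolution site, average nearby low-resolution disparities
    with weights = spatial Gaussian x range Gaussian on the guide values."""
    Hh, Wh = guide_hi.shape
    Hl, Wl = disp_lo.shape
    sy, sx = Hl / Hh, Wl / Wh                    # map hi-res coords to lo-res
    out = np.empty((Hh, Wh), dtype=float)
    for y in range(Hh):
        for x in range(Wh):
            cy, cx = y * sy, x * sx              # target position in lo-res coords
            num = den = 0.0
            for j in range(max(0, int(cy) - 1), min(Hl, int(cy) + 2)):
                for i in range(max(0, int(cx) - 1), min(Wl, int(cx) + 2)):
                    # guide sample nearest to the lo-res neighbour (j, i)
                    gy = min(int(j / sy), Hh - 1)
                    gx = min(int(i / sx), Wh - 1)
                    ws = np.exp(-((j - cy) ** 2 + (i - cx) ** 2) / (2 * sigma_s ** 2))
                    wr = np.exp(-(guide_hi[y, x] - guide_hi[gy, gx]) ** 2 / (2 * sigma_r ** 2))
                    num += ws * wr * disp_lo[j, i]
                    den += ws * wr
            out[y, x] = num / den
    return out

# toy shapes: longitudinal sizes match, lateral size is tripled (N = 3)
disp_hi = joint_bilateral_upsample(np.random.rand(4, 4), np.random.rand(4, 12))
```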
Fig. 21 is a schematic diagram of acquiring high-resolution left-phase detection data and high-resolution right-phase detection data according to an embodiment of the present disclosure. As shown, the method may include steps S1051 through S1056 described below.
S1051: acquiring first channel left split data and first channel right split data from the first channel fusion data array;
In this embodiment, the first channel left split data and the first channel right split data can be obtained by performing split calculation on the first channel fusion data array.
S1052: dividing the first channel left split data into a plurality of second intervals, and dividing the first channel right split data into a plurality of third intervals;
S1053: selecting a plurality of first channel left split data from the second intervals, and selecting a plurality of first channel right split data from the third intervals;
S1054: obtaining first channel left fusion data from the plurality of first channel left split data, and obtaining first channel right fusion data from the plurality of first channel right split data;
S1055: acquiring high-resolution left phase detection data from the left phase fusion data and the first channel left fusion data;
S1056: acquiring high-resolution right phase detection data from the right phase fusion data, the first channel right fusion data, and the high-resolution disparity map.
The following specifically describes the acquisition method of the high-resolution left phase detection data and the high-resolution right phase detection data with reference to fig. 22 to 25.
Fig. 22 is a schematic diagram of the first channel left split data and the first channel right split data according to an embodiment of the present application. As shown, the first channel fusion data array 182 is subjected to splitting calculation to obtain the first channel left split data 221 and the first channel right split data 222.
Fig. 23 is a schematic diagram of the operation of extracting the first channel left split data and the first channel right split data according to an embodiment of the present application. As shown, the first channel left split data 221 is divided into second intervals 231, and a plurality of first channel left split data 233 are extracted from the second intervals 231; the first channel right split data 222 is divided into third intervals 232, and a plurality of first channel right split data 234 are extracted from the third intervals, yielding the first channel left fusion data 241 shown in fig. 24 and the first channel right fusion data 251 shown in fig. 25.
In this embodiment, the positions of the first channel left split data 233 selected from the second intervals 231 follow the distribution of the phase detection pixels 41 in the shadow nodes 31 (referring to fig. 4, the phase detection pixels 41 are arranged in a direction parallel to the diagonal of the shadow node 31), which improves the phase detection effect, i.e., focusing speed and accuracy. It should be understood that this is a preferred embodiment; selecting the first channel left split data 233 at other positions can still achieve the object of the present invention. This embodiment does not limit the positions at which the first channel left split data 233 are selected. The same applies to the selection of the first channel right split data 234.
Fig. 24 is a schematic diagram of a method for obtaining the high-resolution left phase detection data according to an embodiment of the present application. As shown, in this embodiment, filling T columns of data 2411 (the shaded portion) after each column 911 of the left phase fusion data 91 yields the high-resolution left phase detection data 242, whose lateral data amount is approximately (T + 1) times that of the left phase fusion data 91 (the last column is not filled; see below), while the longitudinal data amount is unchanged. That is, the data amount, and hence the resolution, of the high-resolution left phase detection data 242 is increased relative to the left phase fusion data 91.
It should be understood that, in this embodiment, no data is filled after the last column of the left phase fusion data 91, so as to ensure that the algorithm executes correctly and to improve its accuracy. This embodiment does not limit the specific method of filling.
The following description takes T = 2 as an example. Filling the left phase fusion data 91 with the first channel left fusion data 241 yields the high-resolution left phase detection data 242. In this embodiment, the shaded data 2411 of the first channel left fusion data 241 are filled in after each column 911 of the left phase fusion data 91: the data of columns 2, 3, 5, 6, 8, 9, ... of the first channel left fusion data 241 are filled, in turn, after columns 1, 2, 3, ... of the left phase fusion data 91, and so on, yielding the high-resolution left phase detection data 242. That is, in this embodiment, the resolution of the high-resolution left phase detection data 242 is extended to nearly 3 times that of the left phase fusion data 91. It should be understood that the present application limits neither the number of columns taken from the first channel left fusion data 241 to fill the left phase fusion data 91 nor the specific filling positions, as long as the resolution of the resulting high-resolution left phase detection data 242 is greater than that of the left phase fusion data 91; in practice, the filling manner may be adjusted according to the factor by which the data amount of the left phase fusion data 91 needs to be enlarged, i.e., the embodiments of the present application do not limit the specific filling manner. Nor is a specific method limited: if a calculation method such as interpolation can achieve the effect of the present invention, namely that the resolution of the high-resolution left phase detection data 242 is improved relative to the left phase fusion data 91, it falls within the protection scope of the present invention.
In this embodiment, the left phase fusion data 91 is reconstructed by interpolation through the first channel left fusion data 241, so that data reconstruction in which the density of the phase detection pixels 41 reaches a higher density is realized, the cost is saved in hardware, and the imaging efficiency and the focusing accuracy of the image sensor 30 are improved.
Fig. 25 is a schematic diagram of a method for obtaining high-resolution right-phase detection data according to an embodiment of the present application. As shown, the high resolution right phase detection data 252 is collectively obtained by the right phase fusion data 92, the first channel right fusion data 251, and the high resolution disparity map 203. For example, to obtain the right fusion data corresponding to the first channel, which should be filled by the high-resolution right phase detection data 252 in the data 2521 in the first row and the second column, the specific algorithm is as follows: determining the disparity as D according to the data 2031 of the first row and the second column of the high resolution disparity map 20312The selection of the data position corresponds to the data 2521 in the first row and the second column of the high resolution right detection data 252; according to the phase detection pixel layout, the data 2511 of the first row and the second column of the first channel right fused data 251 is used as a starting point to horizontally translate D12The data corresponding to the xh units is the data to be filled in the position of the data 2521. H is a phase pixel conversion coefficient, and so on, all data except the right phase fusion data 2522 in the high-resolution right detection data 252 are calculated, and finally the high-resolution right phase detection data 252 is obtained.
The phase pixel conversion factor H is determined by the operating mode of the image sensor 30 and may be determined by itself before the algorithm is executed. Commonly, the image sensor 30 has a full-size operation mode, an operation mode of H1V2 Binning, an operation mode of H2V2 Binning, and the like.
In one embodiment, in the full-scale operation mode, the pixels on the image sensor 30 are not combined laterally, and the phase-pixel conversion coefficient H is 16, i.e. it represents a difference of 16 pixel units between one pixel parallax.
In another embodiment, in the H1V2 Binning mode, i.e. the pixels on the image sensor 30 do not perform the merging operation in the horizontal direction, and perform the pixel merging operation in the vertical direction, when the phase pixel conversion coefficient H is 16, i.e. it represents that there is a difference of 16 pixel units between the parallaxes of one pixel.
In another embodiment, in the H2V2 Binning operation mode, i.e. when pixels on the image sensor 30 are subjected to one merging calculation in both the horizontal direction and the vertical direction, the phase pixel conversion coefficient H is 8, i.e. it represents that there is 8 pixel unit difference between parallaxes of one pixel. It should be understood that the choice of the phase pixel conversion coefficient H depends on whether the Binning calculation (Binning) is performed in the horizontal direction or not, which depends on the operation mode of the image sensor 30, but should not be construed as a specific limitation to the embodiments of the present invention.
In one embodiment, the pixel disparity D12If the value of (D) is positive, then the horizontal right shift is D from the data 2511 in the first row and the second column of the first channel right fused data 25112The data corresponding to the xh units is the data to be filled in the position of the data 2521.
In another embodiment, the pixel disparity D12If the value of (D) is negative, the horizontal left-shift is performed by D, starting from the data 2511 in the first row and the second column of the first channel right fused data 25112The data corresponding to the xh units is the data to be filled in the position of the data 2521.
In another embodiment, the pixel disparity D12If the value of (1) is zero, the horizontal translation calculation is not needed, and the data 2511 in the first row and the second column of the first channel right fused data 251 is filled in the position of the data 2521.
Illustratively, if the data 2523 to be filled in the high resolution right detection data 252 is calculated, the data corresponding to the first channel right fusion data 251 is horizontally translated by DpqIf the corresponding data after the × H units (p, q are coordinate values of the data 2512, and are all natural numbers greater than or equal to 1) exceeds the range of the first right channel fusion data 251, the corresponding data of the first right channel fusion data 251 corresponding to the data 2523 to be filled may be filled.
It should be understood that one of the methods of the filling calculation is interpolation calculation, but the embodiment of the present application is not limited to a specific method, and other calculation methods than the filling calculation method can achieve the effect of the present application, that is, the resolution of the high-resolution right phase detection data 252 is improved compared with the resolution of the right phase fusion data 92, and all the methods fall within the scope of the present invention.
Through the calculation of the present embodiment, phase difference information can be given to the interpolation data of the high-resolution left phase detection data 242 and the high-resolution right phase detection data 252 to realize the subsequent focusing calculation.
Focusing may be achieved based on the high resolution left phase detection data 242 and the high resolution right phase detection data 252.
In this embodiment, the resolution of the high-resolution disparity map 203 and the resolution of the first-channel right fused data 251 are both greater than the resolution of the right phase fused data 92, and the right phase fused data 92 is subjected to difference reconstruction by the first-channel right fused data 251, so that interpolation reconstruction is achieved in which the density of the phase detection pixels 41 reaches a higher density, the cost is saved in hardware, and the imaging efficiency and the focusing accuracy of the image sensor 30 are improved.
Fig. 26 is a focusing device according to an embodiment of the present application, where the focusing device includes: an acquisition module 261, a calculation module 262, and a focusing module 263, wherein:
the obtaining module 261 is configured to obtain left phase source data and right phase source data, obtain a low-resolution disparity map according to the left phase source data and the right phase source data, obtain a high-resolution guide map, and obtain a high-resolution disparity map according to the low-resolution disparity map and the high-resolution guide map, where a resolution of the high-resolution disparity map is higher than a resolution of the low-resolution disparity map.
Further, in terms of acquiring the high-resolution guidance map, the acquisition module 261 is specifically configured to: acquiring a first channel pixel on the image sensor 30, and acquiring a first channel fusion data array according to the first channel pixel, wherein the first channel pixel is a channel pixel with the largest number ratio in pixels of the image sensor except for a W channel pixel; and acquiring the high-resolution guide map according to the first channel fusion data array.
Further, after acquiring the first channel fused data array, the acquiring module 261 is further configured to: and acquiring first channel left fusion data and first channel right fusion data based on the first channel fusion data array.
The calculating module 262 is configured to calculate high resolution left phase detection data and high resolution right phase detection data according to the left phase source data, the right phase source data, and the high resolution disparity map.
Further, in terms of obtaining the high-resolution left phase detection data and the high-resolution right phase detection data by calculating according to the left phase source data, the right phase source data, and the high-resolution disparity map, the calculating module 262 is specifically configured to: calculating to obtain high-resolution left phase detection data according to the left phase fusion data and the first channel left fusion data; and calculating to obtain high-resolution right phase detection data according to the right phase fusion data, the first channel right fusion data and the high-resolution disparity map.
The focusing module 263 achieves focusing based on the high-resolution left phase detection data and the high-resolution right phase detection data.
The focusing device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. Illustratively, the mobile electronic device may be a camera, a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an Ultra Mobile Personal Computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The focusing device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android (Android) operating system, an iOS operating system, or another possible operating system, and the embodiments of the present application do not specifically limit the focusing apparatus provided in the embodiments of the present application to be capable of implementing each process implemented by the foregoing method embodiments, and in order to avoid repetition, details are not repeated here.
Fig. 27 is an electronic device according to an embodiment of the present application, and as shown in the drawing, an electronic device according to an embodiment of the present application further includes a memory 271, a processor 272, and a program or an instruction stored in the memory 271 and executable on the processor 272, where the program or the instruction implements each process of the above-mentioned focusing method embodiment when executed by the processor 272, and can achieve the same technical effect, and details are not repeated here to avoid repetition. It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Specific numerical values are substituted below to explain the implementation of the present application. The number of pixels above the image sensor 30 is 8000 × 6000, and the number of pixels in the nodes (including the shadow nodes and the blank nodes) of the image sensor 30 is 16 × 16. The number of nodes in the lateral direction of the image sensor 30 is 500, and the number of nodes in the longitudinal direction is 375. The shadow nodes are laterally scaled by
Figure BDA0003273154090000211
In the longitudinal direction having a ratio of
Figure BDA0003273154090000212
Thus, the image sensor 30 has 166 shadow nodes in the lateral direction and 188 shadow nodes in the longitudinal direction.
As shown in fig. 4, there are R, G, B, W four filter elements in the shaded node 31, and the proportion of the number of pixels is constrained to W: G: R: B: 4:2:1: 1. The minimum repeating unit of the pixel array on the image sensor 30 is 8 × 8, the minimum repeating unit is divided into four sub-units in average, the first diagonal (diagonal from the upper right corner to the lower left corner) pixels in each sub-unit are W pixels, the second diagonal (diagonal from the upper left corner to the lower right corner) pixels are one of R, G, B pixels, the four sub-units of the minimum repeating unit have the upper left corner passing through a B/W waveband, the lower left corner and the upper right corner passing through a G/W waveband, and the lower right corner passing through an R/W waveband. It should be understood that the method of the embodiments of the present application is not limited to a particular image sensor, i.e., the arrangement of the pixel array of the image sensor 30 does not affect the implementation of the present invention.
The phase detection pixels 41 are 2 × 2OCL, i.e., there are 4 phase detection pixels 41 under one lens, at positions where the B/W channel pixels of the upper right minimal repeating unit and the lower left minimal repeating unit are located. One shadow node 31 has 42 × 2 OCLs, i.e., 16 phase detection pixels 41 distributed thereon. It should be understood that the position of the phase detection pixel 41 may be arbitrarily set, and the present embodiment does not limit the specific position of the phase detection pixel 41.
All the shadow nodes 31 on the image sensor 30 are selected to obtain the shadow node array 51 shown in fig. 5, which has 166 shadow nodes 31 in the horizontal direction and 188 shadow nodes 31 in the vertical direction. Since there are 16 phase detection pixels 41 in each of the shaded nodes 31, a phase detection pixel array 52 having 2 phase detection pixels 41 in the lateral direction and 8 phase detection pixels 41 in the longitudinal direction can be obtained in each of the shaded nodes 31.
The above operation is repeated, and the phase detection pixels 41 in all the shaded nodes 31 are extracted, resulting in the left phase source data 61 and the right phase source data 62 as shown in fig. 6. The sizes of the data are respectively 166 data in the horizontal direction and 1504 data in the vertical direction.
The phase source data 61 and the right phase source data 62 are respectively subjected to vertical fusion calculation, that is, each two vertical phase detection pixels are subjected to fusion calculation as shown in fig. 8, so that left phase fusion data 91 and right phase fusion data 92 as shown in fig. 9 are obtained, and the sizes of the left phase fusion data and the right phase fusion data are 166 data in the horizontal direction and 752 data in the vertical direction.
The left phase fusion data 91 and the right phase fusion data 92 are subjected to phase disparity calculation to obtain a low resolution disparity map 151 shown in fig. 15, which has 166 data in the horizontal direction and 752 data in the vertical direction.
Next, a high-resolution guide map 203 is acquired. Since the number of pixels of the W/G channel in the image sensor 30 is relatively large, it is first considered that the high-resolution guide map 203 is constituted by the imaging pixels of the two channels. Since the amount of light entering the W channel is large, overexposure or calculation errors are likely to occur, the G channel pixels are selected for constructing the high-resolution guide map 203.
The G-channel pixels 171 in the nodes (including the blank node 32 and the shaded node 31) of the image sensor 30 are distributed as shown in fig. 17, and the G-channel pixels 171 of each node are extracted to obtain a G-channel pixel matrix 172 having a size of 4G-channel pixels in the lateral direction and 16G-channel pixels in the longitudinal direction.
The above operations are repeated to extract G-channel pixels 171 in all the nodes, resulting in a G-channel pixel array 181 as shown in fig. 18, which has a size of 2000G-channel pixels in the horizontal direction and 6000G-channel pixels in the vertical direction. And performing fusion calculation on the G-channel pixel array 181, namely performing fusion calculation on every two G-channel pixels in the longitudinal direction to obtain a G-channel fusion data array with 2000G-channel pixels in the transverse direction and 3000G-channel pixels in the longitudinal direction.
As shown in fig. 19, the G-channel fused data array is divided into a plurality of first sections 191 having 4 data in the horizontal direction and 16 data in the vertical direction. Therefore, the device has 2000 ÷ 4 ÷ 500 first intervals 191 in the transverse direction, 3000 ÷ 16 ÷ 187.5 in the longitudinal direction, that is, 187 first intervals in the longitudinal direction, and 8G-channel data remains. As shown, 4G-channel data 192 (data shown by hatching) are selected therefrom. And the 4G-channel data 192 are grouped into an array having 1G-channel data 192 in the horizontal direction and 4G-channel data 192 in the vertical direction. Therefore, the resulting high-resolution guidance diagram 193 has 500G-channel data in the horizontal direction and 187 × 4+ 4-752G-channel data in the vertical direction.
As shown in fig. 20, the low-resolution disparity map 151 and the high-resolution guide map 193 are jointly subjected to double-side sampling calculation to obtain the high-resolution disparity map 203, which has the same resolution as the high-resolution guide map 193.
As shown in fig. 22, the G-channel fusion data 182 is split into G-channel left split data 221 and G-channel right split data 222, each having a size of 1000 data in the horizontal direction and 3000 data in the vertical direction.
As shown in fig. 23, the G-channel left split data 221 is divided into a plurality of second sections 231, and the G-channel right split data is divided into a plurality of third sections 232. The second section 231 and the third section 232 each have a size of 2 data in the horizontal direction and 16 data in the vertical direction. Selecting 4G-channel left split data 233 from the second section 231, forming an array including 1 data horizontally and 4 data vertically, performing the above operation on all the second sections 231 to obtain G-channel left fused data 191, and performing the same processing on the third section 232 as the second section 231 to obtain G-channel right fused data 201.
The sizes of the G-channel left fusion data and the G-channel right fusion data are both horizontal 500 data and vertical 752 data. The calculation process is similar to the process of obtaining the high-resolution guide map 193 from the G-channel fusion data 182, and is not described herein again.
As shown in fig. 24, the data of columns 2 and 3, columns 5 and 6, and columns 8 and 9 of the G-channel left fused data 241 are sequentially inserted after columns 1, 2, and 3 of the left phase fused data 91, and so on, to finally obtain high-resolution left phase detection data 242 having 496 data in the horizontal direction and 752 data in the vertical direction. The last column of left phase source data 911 is not interpolated later, so the amount of horizontal data is 166 × 3-2 — 496, and in this embodiment, the resolution of the high resolution left phase detection data 242 is increased by nearly 3 times compared to the resolution of the left phase fusion data 91.
As shown in fig. 25, the method for acquiring the high-resolution right phase detection data 252 is as follows, taking the first row and the second column data 2521 as an example: in this embodiment, the phase pixel conversion coefficient H is determined to be 8 based on the operating mode of the image sensor 30 being H2V2 binning. The parallax value of the high-resolution right phase detection data 2521 at the corresponding position 2031 of the high-resolution parallax map 203 is D12And D is12Is positive. At this time, the data to be filled in the high resolution right phase detection data 2521 is shifted to the right by D from the first row and second column data 2511 of the G-channel right fusion data 25112Data corresponding to 8 units. Repeating the above operation on the data to be interpolated on each of the high resolution right phase detection data 252 to obtain the high resolution right phase detection data 252. The last column of data of the right phase fusion data 92 is not followed by interpolation, and therefore, the resulting high-resolution right phase detection data has 496 data in the horizontal direction and 500 data in the vertical direction. In this embodiment, the resolution of the high resolution right phase detection data 252 is expanded by approximately 3 times compared to the resolution of the right phase fused data 92.
By the calculation of the present embodiment, it is also possible to give phase difference information to the interpolation data of the high-resolution left phase detection data 242 and the high-resolution right phase detection data 252, and to realize focusing.
Focusing may be achieved based on the high resolution left phase detection data 242 and the high resolution right phase detection data 252.
The above is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (24)

1. A focusing method applied to an image sensor is characterized by comprising the following steps:
acquiring left phase source data and right phase source data;
obtaining a low-resolution disparity map according to the left phase source data and the right phase source data;
acquiring a high-resolution guide map;
acquiring a high-resolution disparity map according to the low-resolution disparity map and the high-resolution guide map, wherein the resolution of the high-resolution disparity map is higher than that of the low-resolution disparity map;
calculating high-resolution left phase detection data and high-resolution right phase detection data according to the left phase source data, the right phase source data and the high-resolution disparity map; and
focusing is achieved based on the high resolution left phase detection data and the high resolution right phase detection data.
2. The method of claim 1, wherein the image sensor comprises a pixel array divided into two or more nodes comprising a pixel count of E x F, the nodes comprising: the image sensor comprises shadow nodes and blank nodes, wherein a plurality of imaging pixels and X phase detection pixels are distributed in the shadow nodes, the blank nodes are all imaging pixels, and the X, E, F are all natural numbers which are more than or equal to 1.
3. The method of claim 2, wherein the obtaining left phase source data and right phase source data comprises:
obtaining the plurality of shadow nodes;
obtaining a shadow node array according to the shadow nodes;
obtaining a phase detection pixel array according to the shadow node array and the phase detection pixels; and
and obtaining the left phase source data and the right phase source data according to the phase detection pixel array.
4. The method of claim 2, wherein the obtaining a low resolution disparity map from left phase source data and right phase source data comprises:
performing fusion calculation on the left phase source data to obtain left phase fusion data, and performing fusion calculation on the right phase source data to obtain right phase fusion data;
and performing phase parallax calculation on the left phase fusion data and the right phase fusion data to obtain the low-resolution parallax map.
5. The method of claim 4, wherein the performing the phase disparity calculation on the left phase fusion data and the right phase fusion data to obtain the low resolution disparity map comprises:
determining a parallax search range coefficient S;
setting a filling coefficient to V based on the parallax search range coefficient S;
performing edge filling calculation on the left phase fusion data according to the filling coefficient V to obtain left phase edge filling data, and performing edge filling calculation on the right phase fusion data according to the filling coefficient V to obtain right phase edge filling data; and
traversing the left phase edge filling data and the right phase edge filling data, and acquiring phase disparity values corresponding to the left phase edge filling data and the right phase edge filling data to obtain the low-resolution disparity map.
6. The method according to claim 5, before said traversing the left phase edge padding data and the right phase edge padding data, obtaining phase disparity values corresponding to the left phase edge padding data and the right phase edge padding data, and obtaining the low resolution disparity map, further comprising:
selecting any data in the right phase edge filling data as a point to be calculated;
selecting a contrast point in the left phase edge filling data according to the point to be calculated;
extracting a right phase data vector from the right phase edge data based on the control points and the fill factor V;
extracting a left phase data vector based on the point to be calculated, the disparity search range coefficient S and the right phase data vector;
based on the parallax search range coefficient S, taking the right phase data vector as a template, and performing sliding window operation on the left phase data vector to obtain more than two SAD values; and
and obtaining minimum values of the more than two SAD values according to the more than two SAD values, wherein the minimum values are phase disparity values corresponding to the points to be calculated and the control points.
7. The method of claim 4, wherein the acquiring the high resolution guidance map comprises:
acquiring a first channel pixel of the image sensor, and acquiring a first channel fusion data array according to the first channel pixel, wherein the first channel pixel is a channel pixel with the largest number ratio in pixels of the image sensor except a W channel pixel;
and fusing a data array according to the first channel to obtain the high-resolution guide map.
8. The method of claim 7, wherein obtaining a first channel pixel of the image sensor from which a first channel fused data array is derived comprises:
stacking the first channel pixels to form a first channel pixel matrix in the node of the image sensor;
obtaining a first channel pixel array according to the first channel pixel array; and
and performing fusion calculation on every two longitudinal pixels in the first channel pixel array to obtain a first channel fusion data array.
9. The method of claim 7, wherein said fusing the data array from the first channel to obtain a high resolution guide map comprises:
dividing the first channel fusion data array into a plurality of first intervals;
extracting a plurality of first channel data from the first interval; and
obtaining the high-resolution guide map according to the plurality of first channel data;
the number is a natural number greater than or equal to 1.
10. The method according to claim 9, wherein the extracting of the first channel data from the first interval is: extracting the number of first channel data according to the position of the phase detection pixel in the shadow node, wherein the number of bits is a natural number which is greater than or equal to 1.
11. The method of claim 1, wherein the obtaining a high resolution disparity map from the low resolution disparity map and the high resolution guide map comprises: and performing combined double-side sampling calculation on the low-resolution disparity map and the high-resolution guide map to obtain a high-resolution disparity map, wherein the resolution of the high-resolution disparity map is consistent with that of the high-resolution guide map.
12. The method of claim 7, wherein the obtaining high resolution left phase detection data and high resolution right phase detection data from the left phase source data, the right phase source data, and the high resolution disparity map comprises:
acquiring first channel left fusion data and first channel right fusion data according to the first channel fusion data array;
acquiring the high-resolution left phase detection data according to the left phase fusion data and the first channel left fusion data;
and acquiring the high-resolution right phase detection data according to the right phase fusion data, the first channel right fusion data and the high-resolution disparity map.
13. The method of claim 12, wherein said obtaining first-channel left fused data and first-channel right fused data from the first-channel fused data array comprises:
acquiring first channel left split data and first channel right split data according to the first channel fusion data array;
and obtaining first channel left fusion data according to the first channel left split data, and obtaining first channel right fusion data according to the first channel right split data.
14. The method of claim 13, wherein obtaining first channel left fusion data from the first channel left split data and obtaining first channel right fusion data from the first channel right split data comprises:
dividing the first channel left split data into a plurality of second intervals, and dividing the first channel right split data into a plurality of third intervals;
selecting a plurality of first channel left split data from the second interval, and selecting a plurality of first channel right split data from the third interval; and
obtaining the first channel left fusion data according to the plurality of first channel left split data, and obtaining the first channel right fusion data according to the plurality of first channel right split data;
the number is a natural number greater than or equal to 1.
15. The method of claim 14, wherein the selecting the first channel left split data from the second interval and the first channel right split data from the third interval is:
selecting the left split data of the plurality of first channels in the second interval according to the position of the phase detection pixel in the shadow node;
selecting the plurality of first channel right split data in the third interval according to the position of the phase detection pixel in the shadow node;
the number is a natural number greater than or equal to 1.
16. The method of claim 12, wherein the obtaining the high resolution left phase detection data from the left phase fused data and the first channel left fused data is: and filling the left phase fusion data with the first channel left fusion data to obtain the high-resolution left phase detection data.
17. The method of claim 12, wherein the acquiring the high resolution right phase detection data from the right phase fusion data, the first channel right fusion data, and the high resolution disparity map comprises:
determining a phase pixel conversion coefficient;
acquiring a parallax value of the high-resolution parallax map; and
and calculating the high-resolution right phase detection data based on the parallax value, the phase pixel conversion coefficient, the right phase fusion data and the first channel right fusion data.
18. A focusing device, comprising:
the acquisition module is used for acquiring left phase source data and right phase source data, acquiring a low-resolution disparity map according to the left phase source data and the right phase source data, acquiring a high-resolution guide map, and acquiring a high-resolution disparity map according to the low-resolution disparity map and the high-resolution guide map, wherein the resolution of the high-resolution disparity map is higher than that of the low-resolution disparity map;
the calculation module is used for calculating and obtaining the high-resolution left phase detection data and the high-resolution right phase detection data according to the left phase source data, the right phase source data and the high-resolution disparity map; and
and the focusing module is used for realizing focusing based on the high-resolution left phase detection data and the high-resolution right phase detection data.
19. The focusing device of claim 18, wherein after acquiring the left phase source data and the right phase source data, the acquisition module is further specifically configured to:
acquiring left phase fusion data according to the left phase source data;
and acquiring right phase fusion data according to the right phase source data.
20. The focusing device of claim 19, wherein in terms of acquiring a high resolution guide map, the acquisition module is specifically configured to:
acquiring first channel pixels on an image sensor, and acquiring a first channel fusion data array according to the first channel pixels, wherein the first channel pixels are channel pixels with the largest number ratio in the pixels of the image sensor except for W channel pixels;
and acquiring the high-resolution guide map according to the first channel fusion data array.
21. The focusing device of claim 20, wherein after acquiring the first channel fused data array, the acquisition module is further configured to: and acquiring first channel left fusion data and first channel right fusion data based on the first channel fusion data array.
22. The focusing device of claim 21, wherein in the aspect of calculating the high resolution left phase detection data and the high resolution right phase detection data according to the left phase source data, the right phase source data and the high resolution disparity map, the calculating module is specifically configured to:
calculating to obtain the high-resolution left phase detection data according to the left phase fusion data and the first channel left fusion data;
and calculating to obtain the high-resolution right phase detection data according to the right phase fusion data, the first channel right fusion data and the high-resolution disparity map.
23. An electronic device comprising a processor, a memory, and a program stored on the memory and executable on the processor, the program, when executed by the processor, implementing the steps of the focusing method as claimed in any one of claims 1 to 17.
24. A readable storage medium, characterized in that the readable storage medium stores thereon a program that, when executed by a processor, implements the steps of the focusing method according to any one of claims 1 to 17.
CN202111107829.8A 2021-09-22 2021-09-22 Focusing method, focusing device, electronic equipment and storage medium Active CN113938604B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111107829.8A CN113938604B (en) 2021-09-22 2021-09-22 Focusing method, focusing device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111107829.8A CN113938604B (en) 2021-09-22 2021-09-22 Focusing method, focusing device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113938604A true CN113938604A (en) 2022-01-14
CN113938604B CN113938604B (en) 2023-05-09

Family

ID=79276299

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111107829.8A Active CN113938604B (en) 2021-09-22 2021-09-22 Focusing method, focusing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113938604B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012086120A1 (en) * 2010-12-24 2012-06-28 パナソニック株式会社 Image processing apparatus, image pickup apparatus, image processing method, and program
JP2014204299A (en) * 2013-04-05 2014-10-27 キヤノン株式会社 Imaging apparatus and method for controlling the same
WO2021093502A1 (en) * 2019-11-12 2021-05-20 Oppo广东移动通信有限公司 Phase difference obtaining method and apparatus, and electronic device
US20210211615A1 (en) * 2020-01-03 2021-07-08 Samsung Electronics Co., Ltd. Electronic device comprising image sensor and method of operation thereof

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012086120A1 (en) * 2010-12-24 2012-06-28 パナソニック株式会社 Image processing apparatus, image pickup apparatus, image processing method, and program
JP2014204299A (en) * 2013-04-05 2014-10-27 キヤノン株式会社 Imaging apparatus and method for controlling the same
WO2021093502A1 (en) * 2019-11-12 2021-05-20 Oppo广东移动通信有限公司 Phase difference obtaining method and apparatus, and electronic device
US20210211615A1 (en) * 2020-01-03 2021-07-08 Samsung Electronics Co., Ltd. Electronic device comprising image sensor and method of operation thereof

Also Published As

Publication number Publication date
CN113938604B (en) 2023-05-09

Similar Documents

Publication Publication Date Title
Jin et al. Light field spatial super-resolution via deep combinatorial geometry embedding and structural consistency regularization
US20230362344A1 (en) System and Methods for Calibration of an Array Camera
US9412151B2 (en) Image processing apparatus and image processing method
US8947578B2 (en) Apparatus and method of capturing image
CN110896467B (en) Method and apparatus for restoring image
CN105721768B (en) Method and apparatus for generating a suitable slice image from a focus stack
CN112750085A (en) Image restoration method and image restoration apparatus
US20140168371A1 (en) Image processing apparatus and image refocusing method
CN113538243B (en) Super-resolution image reconstruction method based on multi-parallax attention module combination
US10356381B2 (en) Image output apparatus, control method, image pickup apparatus, and storage medium
US11734877B2 (en) Method and device for restoring image obtained from array camera
CN110335228B (en) Method, device and system for determining image parallax
KR101140953B1 (en) Method and apparatus for correcting distorted image
CN111932594A (en) Billion pixel video alignment method and device based on optical flow and medium
CN113938604B (en) Focusing method, focusing device, electronic equipment and storage medium
CN112435168B (en) Reference block scaling method and computer readable storage medium
KR20160049371A (en) Image generating apparatus and method for generating image
JPWO2014077024A1 (en) Image processing apparatus, image processing method, and image processing program
RU2690757C1 (en) System for synthesis of intermediate types of light field and method of its operation
Shi et al. Learning based Deep Disentangling Light Field Reconstruction and Disparity Estimation Application
JP2012015982A (en) Method for deciding shift amount between videos
CN111951159B (en) Processing method for super-resolution of light field EPI image under strong noise condition
Wang et al. Flexible light field angular superresolution via a deep coarse-to-fine framework
CN113971629A (en) Image restoration method and device
CN114648472A (en) Training method of image fusion model, image generation method and device thereof

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant