CN112866545A - Focusing control method and device, electronic equipment and computer readable storage medium


Info

Publication number
CN112866545A
Authority
CN
China
Prior art keywords
phase difference
sub
target
pixel
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911101407.2A
Other languages
Chinese (zh)
Other versions
CN112866545B (en)
Inventor
贾玉虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911101407.2A priority Critical patent/CN112866545B/en
Publication of CN112866545A publication Critical patent/CN112866545A/en
Application granted granted Critical
Publication of CN112866545B publication Critical patent/CN112866545B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • H04N23/675 Focus control based on electronic image sensor signals comprising setting of focusing regions

Abstract

The application relates to a focusing control method and apparatus, an electronic device, and a computer-readable storage medium. The method includes: performing subject detection on an original image to obtain a target subject region; calculating a phase difference in a first direction and a phase difference in a second direction according to the original image of the target subject region, where a preset included angle is formed between the first direction and the second direction; determining a target phase difference from the phase difference in the first direction and the phase difference in the second direction; and focusing on the target subject region according to the target phase difference. Compared with the traditional method, in which a phase difference can be calculated in only one direction, phase differences in two directions clearly reflect more phase difference information. The target phase difference determined from the phase difference in the first direction and the phase difference in the second direction is therefore more accurate, so focusing the target subject region according to the target phase difference greatly improves accuracy during focusing.

Description

Focusing control method and device, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a focus control method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of electronic device technology, more and more users shoot images through electronic devices. In order to ensure that a shot image is clear, a camera module of the electronic device generally needs to be focused, that is, a distance between a lens and an image sensor is adjusted so that a shot object is on a focal plane. The conventional focusing method includes Phase Detection Auto Focus (PDAF).
In conventional phase detection autofocus, phase detection pixel points are arranged in pairs among the pixel points of the image sensor. In each pair, one phase detection pixel point is shielded on its left side and the other is shielded on its right side, so the imaging light beam directed at each pair is split into a left part and a right part. By comparing the images formed by the left and right parts of the beam, a phase difference can be obtained, and focusing can then be performed according to that phase difference. Here, the phase difference refers to the difference between the imaging positions of imaging light rays incident from different directions.
However, focusing by means of phase detection pixel points arranged in the image sensor in this way is not sufficiently accurate.
Disclosure of Invention
The embodiment of the application provides a focusing control method and device, electronic equipment and a computer readable storage medium, which can improve the focusing accuracy in the photographing process.
A focusing control method, applied to an electronic device, includes:
performing subject detection on an original image to obtain a target subject region;
calculating a phase difference in a first direction and a phase difference in a second direction according to the original image of the target subject region, the first direction and the second direction forming a preset included angle; and
determining a target phase difference from the phase difference in the first direction and the phase difference in the second direction, and focusing on the target subject region according to the target phase difference.
A focusing control apparatus includes:
a subject detection module, configured to perform subject detection on an original image to obtain a target subject region;
a phase difference calculation module, configured to calculate a phase difference in a first direction and a phase difference in a second direction according to the original image of the target subject region, the first direction and the second direction forming a preset included angle; and
a phase difference focusing module, configured to determine a target phase difference from the phase difference in the first direction and the phase difference in the second direction and to focus on the target subject region according to the target phase difference.
An electronic device comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to carry out the steps of the above method.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method as above.
According to the focusing control method and apparatus, the electronic device, and the computer-readable storage medium, subject detection is performed on an original image to obtain a target subject region, and a phase difference in a first direction and a phase difference in a second direction are calculated according to the original image of the target subject region, a preset included angle being formed between the first direction and the second direction. A target phase difference is determined from the phase difference in the first direction and the phase difference in the second direction, and the target subject region is focused according to the target phase difference. First, the target subject region is extracted from the original image; second, phase differences in two directions, the first direction and the second direction, are calculated specifically for the original image of the target subject region. Compared with the traditional method, in which a phase difference can be calculated in only one direction, phase differences in two directions clearly reflect more phase difference information. Finally, the target phase difference determined from the two is more accurate, so focusing the target subject region according to the target phase difference greatly improves accuracy during focusing.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of phase detection autofocus;
fig. 2 is a schematic diagram of arranging phase detection pixels in pairs among pixels included in an image sensor;
FIG. 3 is a schematic diagram showing a partial structure of an image sensor according to an embodiment;
FIG. 4 is a schematic diagram of a pixel point in one embodiment;
FIG. 5 is a schematic diagram showing an internal structure of an image sensor according to an embodiment;
FIG. 6 is a diagram illustrating an embodiment of a filter disposed on a pixel group;
FIG. 7 is a flow chart of a focus control method in one embodiment;
FIG. 8 is a flowchart of the method in FIG. 7 for calculating the phase difference in the first direction and the phase difference in the second direction;
FIG. 9 is a flowchart of the method in FIG. 8 for calculating the phase difference in the first direction and the phase difference in the second direction according to the target luminance map;
FIG. 10 is a diagram illustrating a group of pixels in one embodiment;
FIG. 11 is a diagram of a sub-luminance graph in one embodiment;
FIG. 12 is a flowchart illustrating the method in FIG. 7 for focusing the target subject region according to the target phase difference;
FIG. 13 is a diagram illustrating an image processing effect according to an embodiment;
FIG. 14 is a schematic structural diagram of a focus control apparatus according to an embodiment;
fig. 15 is a schematic internal structure diagram of an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that the terms "first," "second," and the like, as used herein, may be used to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first camera may be referred to as a second camera, and similarly, a second camera may be referred to as a first camera, without departing from the scope of the present application. The first camera and the second camera are both cameras, but they are not the same camera.
Fig. 1 is a schematic diagram of the Phase Detection Auto Focus (PDAF) principle. As shown in fig. 1, M1 is the position of the image sensor when the imaging device is in the in-focus state, where the in-focus state refers to a successfully focused state. When the image sensor is located at position M1, the imaging light rays g reflected by the object W toward the Lens in different directions converge on the image sensor; that is, the imaging light rays g reflected by the object W toward the Lens in different directions are imaged at the same position on the image sensor, and the image formed on the image sensor is clear.
M2 and M3 indicate positions where the image sensor may be located when the imaging device is not in focus. As shown in fig. 1, when the image sensor is located at position M2 or position M3, the imaging light rays g reflected by the object W toward the Lens in different directions are imaged at different positions. Referring to fig. 1, when the image sensor is located at position M2, these light rays are imaged at position A and position B, respectively; when the image sensor is located at position M3, they are imaged at position C and position D, respectively. In this case the image formed on the image sensor is not clear.
In the PDAF technique, the difference in the position of the image formed by the imaging light rays entering the lens from different directions in the image sensor can be obtained, for example, as shown in fig. 1, the difference between the position a and the position B, or the difference between the position C and the position D can be obtained; after acquiring the difference of the positions of images formed by imaging light rays entering the lens from different directions in the image sensor, obtaining the out-of-focus distance according to the difference and the geometric relationship between the lens and the image sensor in the camera, wherein the out-of-focus distance refers to the distance between the current position of the image sensor and the position where the image sensor is supposed to be in the in-focus state; the imaging device can focus according to the obtained defocus distance.
It follows that the calculated PD value is 0 when the image is in focus, whereas the larger the calculated value, the farther the lens is from the in-focus position, and the smaller the value, the closer it is. When PDAF focusing is adopted, the PD value is calculated first, the corresponding defocus distance is obtained from the calibrated correspondence between PD values and defocus distances, and the lens is then controlled to move to the in-focus point according to the defocus distance, thereby achieving focusing.
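As an illustration of this calibrated correspondence, the following minimal sketch (in Python) converts a PD value into a defocus distance under the assumption of a linear calibration; the function names and the slope value are hypothetical, not taken from this application.

```python
import numpy as np

def defocus_from_pd(pd_value, slope, intercept=0.0):
    """Convert a phase difference (PD) value to a defocus distance.

    Assumes a calibrated linear relationship defocus = slope * pd + intercept;
    the sign of the result indicates which way the lens should move.
    """
    return slope * pd_value + intercept

def lens_target_position(current_position, pd_value, slope):
    # A PD value of 0 means the image is already in focus.
    defocus = defocus_from_pd(pd_value, slope)
    return current_position + defocus

# Example: with a hypothetical calibrated slope of 1.5 um per PD unit,
# a PD value of 4 suggests moving the lens by 6 um.
print(lens_target_position(100.0, 4, 1.5))  # -> 106.0
```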
In the related art, phase detection pixel points may be arranged in pairs among the pixel points of the image sensor. As shown in fig. 2, a phase detection pixel point pair (hereinafter referred to as a pixel point pair) A, a pixel point pair B, and a pixel point pair C may be provided in the image sensor. In each pixel point pair, one phase detection pixel point is shielded on its left side and the other is shielded on its right side.
For the phase detection pixel point shielded on the left, only the right-hand part of the imaging beam directed at it can form an image on its photosensitive (unshielded) part; for the phase detection pixel point shielded on the right, only the left-hand part of the imaging beam directed at it can form an image on its photosensitive (unshielded) part. The imaging beam is thus divided into a left part and a right part, and the phase difference can be obtained by comparing the images formed by the two parts.
However, because the phase detection pixel points arranged in the image sensor are generally sparse, only a horizontal phase difference can be obtained from them. For a scene with horizontal texture the phase difference cannot be calculated, and the computed PD values become confused and give an incorrect result. For example, when the photographed scene is a horizontal line, left and right images are obtained according to the PD characteristics, but no PD value can be calculated from them.
In order to solve the problem that the phase detection autofocus cannot calculate a PD value for some horizontal texture scenes to achieve focusing, an embodiment of the present application provides an imaging component, which may be configured to detect and output a phase difference value in a first direction and a phase difference value in a second direction, and may implement focusing by using the phase difference value in the second direction for horizontal texture scenes.
In one embodiment, the present application provides an imaging assembly. The imaging assembly includes an image sensor. The image sensor may be a Complementary Metal Oxide Semiconductor (CMOS) image sensor, a Charge-Coupled Device (CCD), a quantum thin film sensor, an organic sensor, or the like.
Fig. 3 is a schematic structural diagram of part of an image sensor in one embodiment. The image sensor 300 includes a plurality of pixel point groups Z arranged in an array, each pixel point group Z includes a plurality of pixel points D arranged in an array, and each pixel point D corresponds to one photosensitive unit. Each pixel point D in turn includes a plurality of sub-pixel points d arranged in an array; that is, each photosensitive unit may be composed of a plurality of photosensitive elements arranged in an array. A photosensitive element is an element capable of converting an optical signal into an electrical signal. In one embodiment, the photosensitive element may be a photodiode. In the embodiment of the present application, each pixel point group Z includes 4 pixel points D arranged in a 2 × 2 array, and each pixel point may include 4 sub-pixel points d arranged in a 2 × 2 array. Each pixel point group thus forms a 2 × 2 PD structure that can directly receive optical signals, perform photoelectric conversion, and simultaneously output left-right and up-down signals. Each color channel may consist of 4 sub-pixel points.
As shown in fig. 4, taking a pixel point that includes sub-pixel point 1, sub-pixel point 2, sub-pixel point 3, and sub-pixel point 4 as an example, sub-pixel point 1 may be combined with sub-pixel point 2, and sub-pixel point 3 with sub-pixel point 4, to form a PD pixel pair in the up-down direction; a horizontal edge can then be detected to obtain the phase difference value in the second direction, that is, the PD value in the vertical direction. Likewise, sub-pixel point 1 may be combined with sub-pixel point 3, and sub-pixel point 2 with sub-pixel point 4, to form a PD pixel pair in the left-right direction; a vertical edge can then be detected to obtain the phase difference value in the first direction, that is, the PD value in the horizontal direction.
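The pairing described above can be illustrated with a short sketch. It assumes each pixel point is given as a 2 × 2 array of sub-pixel luminance values (sub-pixel points 1 to 4 in row-major order); the function name and data layout are assumptions made for illustration.

```python
import numpy as np

def pd_pairs(pixel):
    """pixel: 2x2 array of sub-pixel values
       [[s1, s2],
        [s3, s4]]
    Returns the up-down PD pair (for the second, vertical direction)
    and the left-right PD pair (for the first, horizontal direction)."""
    s1, s2 = pixel[0]
    s3, s4 = pixel[1]
    top, bottom = s1 + s2, s3 + s4   # up-down pair: detects horizontal edges
    left, right = s1 + s3, s2 + s4   # left-right pair: detects vertical edges
    return (top, bottom), (left, right)

pixel = np.array([[10.0, 12.0],
                  [ 9.0, 11.0]])
print(pd_pairs(pixel))  # ((22.0, 20.0), (19.0, 23.0))
```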
Fig. 5 is a schematic view of the internal structure of the imaging device in one embodiment. As shown in fig. 5, the imaging device includes a lens 50, a filter 52, and an image sensor 54. The lens 50, the filter 52, and the image sensor 54 are located in sequence on the incident light path; that is, the lens 50 is disposed above the filter 52, and the filter 52 is disposed above the image sensor 54.
The filter 52 may be of three types, red, green, and blue, each of which transmits only light of the wavelengths corresponding to its color. One filter 52 is disposed on each pixel point.
The image sensor 54 may be the image sensor of fig. 3.
The lens 50 is used to receive incident light and transmit it to the filter 52. The filter 52 filters the incident light, and the filtered light is then incident, pixel point by pixel point, on the photosensitive units of the image sensor 54.
The light-sensing unit in the image sensor 54 converts light incident from the optical filter 52 into a charge signal by the photoelectric effect, and generates a pixel signal in accordance with the charge signal. The charge signal corresponds to the received light intensity.
Fig. 6 is a schematic diagram illustrating the filters disposed on a pixel point group according to an embodiment. The pixel point group Z includes 4 pixel points D arranged in two rows and two columns. The color channel of the pixel point in the first row and first column is green, that is, the filter disposed on it is a green filter; the color channel of the pixel point in the first row and second column is red, that is, the filter disposed on it is a red filter; the color channel of the pixel point in the second row and first column is blue, that is, the filter disposed on it is a blue filter; and the color channel of the pixel point in the second row and second column is green, that is, the filter disposed on it is a green filter.
FIG. 7 is a flowchart of a focusing control method in one embodiment. The focusing control method in this embodiment is described by taking the image sensor in fig. 5 as an example. As shown in fig. 7, the focusing control method includes steps 720 to 760.
Step 720: perform subject detection on the original image to obtain a target subject region.
The original image is an RGB image obtained by the camera module of the electronic device shooting a scene, and its display range is consistent with the range of image information that the camera module can capture. The electronic device performs subject detection on the original image; a trained deep learning neural network model can be used for this. Of course, in this embodiment other methods may also be used to perform subject detection and obtain the target subject region, which is not limited in this application.
The target subject region is the region of the original image that contains the target subject. It may be an irregular region obtained along the edge of the target subject or a rectangular region containing the target subject, which is not limited in this application.
Step 740: calculate a phase difference in a first direction and a phase difference in a second direction according to the original image of the target subject region, a preset included angle being formed between the first direction and the second direction.
After subject detection is performed on the original image to obtain the target subject region, the target subject region is extracted from the original image. Subject segmentation of the original image yields a target subject region that contains the target subject and a background region. For a relatively regular target subject, the background region contained in the target subject region obtained by subject segmentation is small; for example, for target subjects such as a person, a cat, or a dog, the target subject region is mostly a rectangular region or a region segmented along the edge of the target subject. For an irregular target subject, because the edge of the target subject is irregular, the target subject region obtained by subject segmentation generally contains a background region; for example, for an irregular target subject such as a spider, it is difficult to segment along the edge of the subject, so the target subject region is often rectangular, and the background region contained in that rectangular region is large. When the target subject in the target subject region is focused, a large background region may affect the focusing accuracy to a certain extent. Image preprocessing may therefore be performed on the background region in the subject region to reduce its interference with the target subject. The preprocessing may erase the background region or reduce its sharpness, so that the target subject in the target subject region becomes more prominent and clearer and can be focused more accurately when the target subject region is focused later.
A phase difference in a first direction and a phase difference in a second direction are then calculated for the original image, after image preprocessing, corresponding to the target subject region, a preset included angle being formed between the first direction and the second direction. For example, the first direction and the second direction may be perpendicular to each other. Of course, in other embodiments the two directions only need to be different and need not be perpendicular. Specifically, the first direction and the second direction may be determined according to the texture features in the original image. When the determined first direction is the horizontal direction of the original image, the second direction may be the vertical direction of the original image. The phase difference in the horizontal direction can reflect horizontal texture features in the target subject region, and the phase difference in the vertical direction can reflect vertical texture features, so both the horizontally textured and the vertically textured parts of the target subject region are taken into account.
Specifically, a luminance map of the target subject region is generated from the original image (RGB map) of the target subject region. Then, the phase difference in the first direction and the phase difference in the second direction, the two directions being perpendicular to each other, are calculated from the luminance map of the target subject region. Generally, a frequency-domain algorithm or a spatial-domain algorithm can be used to calculate the phase difference value, although other methods may also be used. The frequency-domain algorithm exploits the Fourier shift property: the acquired target luminance map is converted from the spatial domain to the frequency domain by a Fourier transform, and the phase correlation is then computed; when the correlation reaches its maximum (peak), there is a maximum displacement, and an inverse Fourier transform then shows how large that displacement is in the spatial domain. The spatial-domain algorithm first finds feature points, such as edge features, DoG (Difference of Gaussians) features, or Harris corner points, and then calculates the displacement from these feature points.
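As a concrete illustration of the frequency-domain approach, the following sketch estimates the displacement between two luminance maps by phase correlation using the Fourier shift property. It is a generic phase-correlation routine assuming two single-channel numpy arrays of the same size, not the exact algorithm of this application.

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the (row, column) displacement between two luminance maps
    using phase correlation (the Fourier shift property)."""
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    cross_power = fa * np.conj(fb)
    cross_power /= np.abs(cross_power) + 1e-12     # keep only the phase
    correlation = np.fft.ifft2(cross_power).real   # back to the spatial domain
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    shape = np.array(img_a.shape, dtype=float)
    shift = np.array(peak, dtype=float)
    shift[shift > shape / 2] -= shape[shift > shape / 2]  # wrap to signed shifts
    return shift  # (vertical displacement, horizontal displacement)
```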
Step 760: determine a target phase difference from the phase difference in the first direction and the phase difference in the second direction, and focus on the target subject region according to the target phase difference.
After the phase difference in the first direction and the phase difference in the second direction are calculated, the more accurate of the two is extracted as the target phase difference. The defocus distance is calculated according to the target phase difference, and the lens is then controlled to move according to the defocus distance so as to focus on the target subject region.
In the process of controlling the lens to focus on the target subject region, other autofocus modes may be used together with phase difference focusing (PDAF). For example, one or more of Time-of-Flight Auto Focus (TOFAF), Contrast Auto Focus (CAF), laser focusing, and the like may be combined with phase focusing in a hybrid scheme. Specifically, TOF may be used for coarse focusing followed by PDAF for fine focusing, or PDAF may be used for coarse focusing followed by CAF for fine focusing, and so on. The scheme of PDAF for coarse focusing and CAF for fine focusing combines the speed of phase autofocus with the precision of contrast autofocus: phase autofocus first adjusts the lens position quickly, at which point the photographed object is already nearly sharp, and contrast focusing then performs fine adjustment. Because the focus position has been adjusted in advance, the contrast maximum can be determined in less time, so focusing is both fast and accurate.
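A hedged sketch of the coarse-then-fine strategy (PDAF for coarse focusing followed by contrast autofocus for fine adjustment) is given below. The callbacks move_lens, measure_pd, measure_contrast, and pd_to_defocus are hypothetical stand-ins for camera driver interfaces and a calibrated correspondence, not APIs defined by this application.

```python
def hybrid_focus(move_lens, measure_pd, measure_contrast,
                 pd_to_defocus, fine_step=1.0, fine_range=5):
    """Coarse focusing with PDAF, then fine focusing with contrast AF."""
    # Coarse stage: a single PDAF step moves the lens close to the focus point.
    pd = measure_pd()
    move_lens(pd_to_defocus(pd))

    # Fine stage: scan a small range around the PDAF result and keep the
    # lens offset that gives the highest contrast.
    best_offset, best_contrast = 0.0, measure_contrast()
    for i in range(-fine_range, fine_range + 1):
        offset = i * fine_step
        move_lens(offset)
        contrast = measure_contrast()
        move_lens(-offset)              # return to the PDAF position
        if contrast > best_contrast:
            best_offset, best_contrast = offset, contrast
    move_lens(best_offset)
```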
In the embodiment of the present application, the target subject region is first extracted from the original image, and phase differences in two directions, the first direction and the second direction, are then calculated specifically for the original image of the target subject region. Compared with the traditional method, in which a phase difference can be calculated in only one direction, phase differences in two directions clearly reflect more phase difference information. Finally, the target phase difference determined from the phase difference in the first direction and the phase difference in the second direction is more accurate, so focusing the target subject region according to the target phase difference greatly improves accuracy during focusing.
In one embodiment, the electronic device includes an image sensor, the image sensor includes a plurality of pixel point groups arranged in an array, and each pixel point group includes M × N pixel points arranged in an array. Each pixel point corresponds to one photosensitive unit and includes a plurality of sub-pixel points arranged in an array, where M and N are both natural numbers greater than or equal to 2.
As shown in fig. 8, step 740 of calculating a phase difference in a first direction and a phase difference in a second direction according to the original image of the target subject region includes:
Step 742: for each pixel point group in the original image of the target subject region, obtain a sub-luminance map corresponding to the pixel point group according to the luminance values of the sub-pixel points at the same position within each pixel point of the group.
In general, the luminance value of a pixel of an image sensor may be represented by the luminance value of a sub-pixel included in the pixel. The imaging device can obtain the sub-brightness map corresponding to the pixel point group according to the brightness value of the sub-pixel point at the same position of each pixel point in the pixel point group. The brightness value of the sub-pixel point refers to the brightness value of the optical signal received by the photosensitive element corresponding to the sub-pixel point.
As described above, the sub-pixel included in the image sensor is a photosensitive element capable of converting an optical signal into an electrical signal, so that the intensity of the optical signal received by the sub-pixel can be obtained according to the electrical signal output by the sub-pixel, and the luminance value of the sub-pixel can be obtained according to the intensity of the optical signal received by the sub-pixel.
Step 744: generate a target luminance map according to the sub-luminance maps corresponding to the pixel point groups.
The imaging device can stitch the sub-luminance maps corresponding to the pixel point groups according to the array arrangement of the pixel point groups in the image sensor to obtain the target luminance map.
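A minimal sketch of this stitching step is shown below; it assumes the sub-luminance maps are given as a two-dimensional grid of equally sized numpy arrays indexed by the array position of their pixel point groups.

```python
import numpy as np

def stitch_target_luminance(sub_maps):
    """sub_maps: 2-D list of sub-luminance maps, indexed by the
    (row, column) position of the pixel point group in the sensor array."""
    rows = [np.hstack(row_of_maps) for row_of_maps in sub_maps]
    return np.vstack(rows)

# Example with four 2x2 sub-luminance maps arranged in a 2x2 grid.
j = [[np.full((2, 2), v) for v in row] for row in [[1.0, 2.0], [3.0, 4.0]]]
print(stitch_target_luminance(j).shape)  # (4, 4)
```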
Step 746, calculating the phase difference in the first direction and the phase difference in the second direction according to the target luminance map.
The target luminance map is sliced to obtain a first sliced luminance map and a second sliced luminance map, and the phase difference values of mutually matched pixels are determined according to the positional differences of those pixels in the first sliced luminance map and the second sliced luminance map. The phase difference value in the first direction and the phase difference value in the second direction are then determined from the phase difference values of the matched pixels.
In this embodiment of the application, the image sensor in the electronic device includes a plurality of pixel point groups arranged in an array, and each pixel point group includes a plurality of pixel points arranged in an array. Each pixel point corresponds to one photosensitive unit and includes a plurality of sub-pixel points arranged in an array, each sub-pixel point corresponding to one photodiode. A luminance value, namely the luminance value of the sub-pixel point, can therefore be obtained from each photodiode. The sub-luminance map corresponding to each pixel point group is obtained according to the luminance values of its sub-pixel points, and the sub-luminance maps corresponding to the pixel point groups are stitched according to the array arrangement of the pixel point groups in the image sensor to obtain the target luminance map. Finally, the phase difference in the first direction and the phase difference in the second direction can be calculated according to the target luminance map.
The target phase difference is then determined from the phase difference in the first direction and the phase difference in the second direction, so the target phase difference is more accurate; focusing the target subject region according to the target phase difference can therefore greatly improve accuracy during focusing.
In one embodiment, as shown in fig. 9, step 746 of calculating the phase difference in the first direction and the phase difference in the second direction according to the target luminance map includes step 746a and step 746b.
Step 746a: slice the target luminance map to obtain a first sliced luminance map and a second sliced luminance map, and determine the phase difference values of mutually matched pixels according to the positional differences of those pixels in the first sliced luminance map and the second sliced luminance map.
In one way of slicing the target luminance map, the imaging device may slice it along the column direction (the y-axis direction of the image coordinate system); during this slicing, each cut line is perpendicular to the column direction.
In another way, the imaging device may slice the target luminance map along the row direction (the x-axis direction of the image coordinate system); during this slicing, each cut line is perpendicular to the row direction.
The first sliced luminance map and the second sliced luminance map obtained by slicing along the column direction may be called the upper map and the lower map, respectively. Those obtained by slicing along the row direction may be called the left map and the right map, respectively.
Here, "pixels matched with each other" means that pixel matrices composed of the pixels themselves and their surrounding pixels are similar to each other. For example, pixel a and its surrounding pixels in the first tangential luminance map form a pixel matrix with 3 rows and 3 columns, and the pixel values of the pixel matrix are:
2 15 70
1 35 60
0 100 1
Pixel b and its surrounding pixels in the second sliced luminance map also form a pixel matrix of 3 rows and 3 columns, with pixel values:
1 15 70
1 36 60
0 100 2
as can be seen from the above, the two matrices are similar, and pixel a and pixel b can be considered to match each other. The pixel matrixes are judged to be similar in many ways, usually, the pixel values of each corresponding pixel in two pixel matrixes are subtracted, the absolute values of the obtained difference values are added, and the result of the addition is used for judging whether the pixel matrixes are similar, that is, if the result of the addition is smaller than a preset threshold, the pixel matrixes are considered to be similar, otherwise, the pixel matrixes are considered to be dissimilar.
For example, for the two pixel matrices of 3 rows and 3 columns, 1 and 2 are subtracted, 15 and 15 are subtracted, 70 and 70 are subtracted, … … are added, and the absolute values of the obtained differences are added to obtain an addition result of 3, and if the addition result of 3 is smaller than a preset threshold, the two pixel matrices of 3 rows and 3 columns are considered to be similar.
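The similarity test just described is a sum of absolute differences (SAD). A small sketch, assuming 3 × 3 numpy arrays and an illustrative threshold, reproduces the example above:

```python
import numpy as np

def matrices_match(block_a, block_b, threshold=10):
    """Return True if two pixel matrices are similar: the sum of the
    absolute differences of corresponding pixels is below the threshold."""
    sad = np.abs(block_a.astype(int) - block_b.astype(int)).sum()
    return sad < threshold

a = np.array([[2, 15, 70], [1, 35, 60], [0, 100, 1]])
b = np.array([[1, 15, 70], [1, 36, 60], [0, 100, 2]])
print(matrices_match(a, b))  # True, the SAD is 3
```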
Another way to judge whether pixel matrices are similar is to extract their edge features, for example by means of a Sobel convolution kernel or a Laplacian operator, and judge similarity from the edge features.
Here, the "positional difference of mutually matched pixels" refers to a difference in the position of a pixel located in the first sliced luminance graph and the position of a pixel located in the second sliced luminance graph among mutually matched pixels. As exemplified above, the positional difference of the pixel a and the pixel b that match each other refers to the difference in the position of the pixel a in the first sliced luminance graph and the position of the pixel b in the second sliced luminance graph.
The pixels matched with each other respectively correspond to different images formed in the image sensor by imaging light rays entering the lens from different directions. For example, a pixel a in the first sliced luminance graph and a pixel B in the second sliced luminance graph match each other, where the pixel a may correspond to the image formed at the a position in fig. 1 and the pixel B may correspond to the image formed at the B position in fig. 1.
Since the matched pixels respectively correspond to different images formed by imaging light rays entering the lens from different directions in the image sensor, the phase difference of the matched pixels can be determined according to the position difference of the matched pixels.
Step 746b: determine the phase difference value in the first direction or the phase difference value in the second direction according to the phase difference values of the mutually matched pixels.
When the first sliced luminance map contains the pixels of the even-numbered rows and the second sliced luminance map contains the pixels of the odd-numbered rows, and pixel a in the first sliced luminance map and pixel b in the second sliced luminance map match each other, the phase difference value in the first direction can be determined according to the phase difference between pixel a and pixel b.
When the first sliced luminance map contains the pixels of the even-numbered columns and the second sliced luminance map contains the pixels of the odd-numbered columns, and pixel a in the first sliced luminance map and pixel b in the second sliced luminance map match each other, the phase difference value in the second direction can be determined according to the phase difference between pixel a and pixel b.
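Putting the slicing and matching steps together, the following sketch splits a target luminance map into even and odd rows (or columns), matches blocks between the two sliced maps with the SAD test, and averages the resulting positional differences. The window and search parameters are illustrative assumptions, and the exhaustive search shown here is a simplification rather than the search strategy of this application.

```python
import numpy as np

def directional_phase_difference(luma, direction="first", window=1, search=4):
    """Split the target luminance map into even/odd rows ("first" direction)
    or even/odd columns ("second" direction) and estimate the average shift
    between the two sliced maps by SAD block matching."""
    luma = np.asarray(luma, dtype=float)
    if direction == "second":
        luma = luma.T                      # treat columns as rows
    first_map, second_map = luma[0::2], luma[1::2]

    shifts = []
    h, w = first_map.shape
    rows = min(h, second_map.shape[0])
    for y in range(window, rows - window):
        for x in range(window + search, w - window - search):
            ref = first_map[y - window:y + window + 1, x - window:x + window + 1]
            # try every candidate offset in the other sliced map, keep the best
            sads = [np.abs(ref - second_map[y - window:y + window + 1,
                                            x + dx - window:x + dx + window + 1]).sum()
                    for dx in range(-search, search + 1)]
            shifts.append(int(np.argmin(sads)) - search)
    return float(np.mean(shifts)) if shifts else 0.0
```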
In the embodiment of the present application, the sub-luminance maps corresponding to the pixel point groups are stitched to obtain the target luminance map, and the target luminance map is divided into two sliced luminance maps. The phase difference values of mutually matched pixels can be determined quickly by pixel matching, and because the sliced maps contain rich phase difference information, the accuracy and stability of focusing are improved.
Fig. 10 is a schematic diagram of a pixel point group in one embodiment. As shown in fig. 10, the pixel point group includes 4 pixel points arranged in two rows and two columns, namely pixel points D1, D2, D3, and D4. Each pixel point includes 4 sub-pixel points arranged in two rows and two columns; the sub-pixel points are d11, d12, d13, d14, d21, d22, d23, d24, d31, d32, d33, d34, d41, d42, d43, and d44.
As shown in fig. 10, sub-pixel points d11, d21, d31, and d41 occupy the same position within their respective pixel points, namely the first row and first column; sub-pixel points d12, d22, d32, and d42 occupy the first row and second column; sub-pixel points d13, d23, d33, and d43 occupy the second row and first column; and sub-pixel points d14, d24, d34, and d44 occupy the second row and second column.
In one embodiment, step 742 of obtaining the sub-luminance map corresponding to a pixel point group according to the luminance values of the sub-pixel points at the same position within each pixel point of the group may include steps A1 to A3.
Step A1: the imaging device determines the sub-pixel points at the same position within each pixel point and obtains a plurality of sub-pixel point sets, where the sub-pixel points in each set occupy the same position within their respective pixel points.
The imaging device determines the sub-pixel points at the same position from pixel points D1, D2, D3, and D4, obtaining 4 sub-pixel point sets J1, J2, J3, and J4. Set J1 includes sub-pixel points d11, d21, d31, and d41, which occupy the same position within their pixel points, namely the first row and first column; set J2 includes sub-pixel points d12, d22, d32, and d42, which occupy the first row and second column; set J3 includes sub-pixel points d13, d23, d33, and d43, which occupy the second row and first column; and set J4 includes sub-pixel points d14, d24, d34, and d44, which occupy the second row and second column.
Step A2, for each sub-pixel point set, the imaging device obtains the brightness value corresponding to the sub-pixel point set according to the brightness value of each sub-pixel point in the sub-pixel point set.
Optionally, in step a2, the imaging device may determine a color coefficient corresponding to each sub-pixel point in the sub-pixel point set, where the color coefficient is determined according to a color channel corresponding to the sub-pixel point.
For example, the sub-pixel D11 belongs to the D1 pixel, the filter included in the D1 pixel may be a green filter, that is, the color channel of the D1 pixel is green, the color channel of the included sub-pixel D11 is also green, and the imaging device may determine the color coefficient corresponding to the sub-pixel D11 according to the color channel (green) of the sub-pixel D11.
After determining the color coefficient corresponding to each sub-pixel point in the sub-pixel point set, the imaging device may multiply the color coefficient corresponding to each sub-pixel point in the sub-pixel point set with the luminance value to obtain a weighted luminance value of each sub-pixel point in the sub-pixel point set.
For example, the imaging device may multiply the luminance value of the sub-pixel d11 with the color coefficient corresponding to the sub-pixel d11 to obtain a weighted luminance value of the sub-pixel d 11.
After the weighted brightness value of each sub-pixel in the sub-pixel set is obtained, the imaging device may add the weighted brightness values of each sub-pixel in the sub-pixel set to obtain a brightness value corresponding to the sub-pixel set.
For example, for the sub-pixel point set J1, the brightness value corresponding to the sub-pixel point set J1 can be calculated based on the following first formula.
Y_TL=Y_21*C_R+(Y_11+Y_41)*C_G/2+Y_31*C_B。
Here Y_TL is the luminance value corresponding to sub-pixel point set J1; Y_21, Y_11, Y_41, and Y_31 are the luminance values of sub-pixel points d21, d11, d41, and d31, respectively; C_R is the color coefficient of d21, C_G/2 the color coefficient of d11 and d41, and C_B the color coefficient of d31; Y_21*C_R, Y_11*C_G/2, Y_41*C_G/2, and Y_31*C_B are the corresponding weighted luminance values.
For the sub-pixel point set J2, the brightness value corresponding to the sub-pixel point set J2 can be calculated based on the following second formula.
Y_TR=Y_22*C_R+(Y_12+Y_42)*C_G/2+Y_32*C_B。
Here Y_TR is the luminance value corresponding to sub-pixel point set J2; Y_22, Y_12, Y_42, and Y_32 are the luminance values of sub-pixel points d22, d12, d42, and d32, respectively; C_R is the color coefficient of d22, C_G/2 the color coefficient of d12 and d42, and C_B the color coefficient of d32; Y_22*C_R, Y_12*C_G/2, Y_42*C_G/2, and Y_32*C_B are the corresponding weighted luminance values.
For the sub-pixel point set J3, the brightness value corresponding to the sub-pixel point set J3 can be calculated based on the following third formula.
Y_BL=Y_23*C_R+(Y_13+Y_43)*C_G/2+Y_33*C_B。
Here Y_BL is the luminance value corresponding to sub-pixel point set J3; Y_23, Y_13, Y_43, and Y_33 are the luminance values of sub-pixel points d23, d13, d43, and d33, respectively; C_R is the color coefficient of d23, C_G/2 the color coefficient of d13 and d43, and C_B the color coefficient of d33; Y_23*C_R, Y_13*C_G/2, Y_43*C_G/2, and Y_33*C_B are the corresponding weighted luminance values.
For the sub-pixel point set J4, the brightness value corresponding to the sub-pixel point set J4 can be calculated based on the following fourth formula.
Y_BR=Y_24*C_R+(Y_14+Y_44)*C_G/2+Y_34*C_B。
Here Y_BR is the luminance value corresponding to sub-pixel point set J4; Y_24, Y_14, Y_44, and Y_34 are the luminance values of sub-pixel points d24, d14, d44, and d34, respectively; C_R is the color coefficient of d24, C_G/2 the color coefficient of d14 and d44, and C_B the color coefficient of d34; Y_24*C_R, Y_14*C_G/2, Y_44*C_G/2, and Y_34*C_B are the corresponding weighted luminance values.
Step a3, the imaging device generates a sub-luminance map according to the luminance value corresponding to each sub-pixel set.
The sub-luminance map comprises a plurality of pixels, each pixel in the sub-luminance map corresponds to one sub-pixel set, and the pixel value of each pixel is equal to the luminance value corresponding to the corresponding sub-pixel set.
FIG. 11 is a diagram of a sub-luminance graph in one embodiment. As shown in fig. 11, the sub-luminance map includes 4 pixels, wherein the pixels in the first row and the first column correspond to the sub-pixel set J1 and have the pixel value Y _ TL, the pixels in the first row and the second column correspond to the sub-pixel set J2 and have the pixel value Y _ TR, the pixels in the second row and the first column correspond to the sub-pixel set J3 and have the pixel value Y _ BL, and the pixels in the second row and the second column correspond to the sub-pixel set J4 and have the pixel value Y _ BR.
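The four formulas above, together with step A3, can be expressed as one weighted sum per sub-pixel point set. The sketch below assumes the pixel point group is supplied as a 4 × 4 array of sub-pixel luminance values laid out as in fig. 10 with the filter pattern of fig. 6 (green, red in the first row; blue, green in the second); the default color coefficients are assumed values, since the application does not specify them.

```python
import numpy as np

def sub_luminance_map(group, c_r=0.299, c_g=0.587, c_b=0.114):
    """group: 4x4 array of sub-pixel luminance values for one pixel point
    group, laid out as
        [D1 | D2]     D1, D4: green    D2: red    D3: blue
        [D3 | D4]
    Returns the 2x2 sub-luminance map (one value per sub-pixel point set).

    The default coefficients are common luma weights; the actual color
    coefficients C_R, C_G, C_B are assumptions, not given by the application."""
    d1, d2 = group[0:2, 0:2], group[0:2, 2:4]
    d3, d4 = group[2:4, 0:2], group[2:4, 2:4]
    out = np.zeros((2, 2))
    for r in range(2):
        for c in range(2):
            y_g1, y_r = d1[r, c], d2[r, c]   # green and red sub-pixel points
            y_b, y_g2 = d3[r, c], d4[r, c]   # blue and green sub-pixel points
            # e.g. for set J1: Y_TL = Y_21*C_R + (Y_11 + Y_41)*C_G/2 + Y_31*C_B
            out[r, c] = y_r * c_r + (y_g1 + y_g2) * c_g / 2 + y_b * c_b
    return out
```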
In one embodiment, as shown in fig. 12, step 760 of determining a target phase difference from the phase difference in the first direction and the phase difference in the second direction and focusing the target subject region according to the target phase difference includes:
Step 762: obtain a first confidence of the phase difference in the first direction and a second confidence of the phase difference in the second direction;
Step 764: select, as the target phase difference, the phase difference whose confidence is the larger of the first confidence and the second confidence, and determine the corresponding defocus distance from the correspondence between phase difference and defocus distance according to the target phase difference;
step 766, controlling the lens to move according to the defocusing distance so as to focus.
Specifically, when the confidence of the phase difference value in the first direction is greater than the confidence of the phase difference value in the second direction, the phase difference value in the first direction is selected, a corresponding defocus distance value is obtained according to the phase difference value in the first direction, and the moving direction is determined to be the horizontal direction.
And when the confidence coefficient of the phase difference value in the first direction is smaller than that of the phase difference value in the second direction, selecting the phase difference value in the second direction, obtaining a corresponding defocus distance value according to the phase difference value in the second direction, and determining the moving direction as the vertical direction.
When the confidence of the phase difference value in the first direction is equal to the confidence of the phase difference value in the second direction, a defocus distance value in the horizontal direction can be determined according to the phase difference value in the first direction and a defocus distance value in the vertical direction according to the phase difference value in the second direction; the lens may then be moved first by the horizontal defocus distance value and then by the vertical one, or first by the vertical defocus distance value and then by the horizontal one.
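A minimal sketch of steps 762 to 766 under these assumptions is given below; pd_to_defocus stands for the calibrated correspondence between phase difference and defocus distance, move_lens for the lens driver, and both names are hypothetical. The equal-confidence case, in which both defocus distance values are applied in turn, is omitted for brevity.

```python
def focus_by_confidence(pd_first, conf_first, pd_second, conf_second,
                        pd_to_defocus, move_lens):
    """Steps 762-766: pick the phase difference with the higher confidence
    as the target phase difference, look up the defocus distance, and move
    the lens accordingly."""
    if conf_first > conf_second:
        target_pd, direction = pd_first, "horizontal"
    else:
        target_pd, direction = pd_second, "vertical"
    defocus = pd_to_defocus(target_pd)
    move_lens(defocus, direction)
    return target_pd, defocus
```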
For a scene with horizontal texture, the PD pixel pairs in the horizontal direction cannot obtain the phase difference value in the first direction, so the PD pixel pairs in the vertical direction are used instead to obtain the phase difference value in the second direction; the defocus distance value is calculated according to the phase difference value in the second direction, and the lens is then controlled to move according to that defocus distance value to achieve focusing.
For a scene with vertical texture, the PD pixel pairs in the vertical direction cannot obtain the phase difference value in the second direction, so the PD pixel pairs in the horizontal direction are used instead to obtain the phase difference value in the first direction; the defocus distance value is calculated according to the phase difference value in the first direction, and the lens is then controlled to move according to that defocus distance value to achieve focusing.
According to the above focusing control method, the phase difference value in the first direction and the phase difference value in the second direction are obtained, the defocus distance value and the moving direction are determined according to them, and the lens is controlled to move according to the defocus distance value and the moving direction, thereby achieving phase detection autofocus.
In one embodiment, subject detection is performed on the original image to obtain the target subject region. A trained deep learning neural network model can be used to perform the subject detection on the original image. The subject detection process specifically includes the following steps.
first, a visible light map is acquired.
Subject detection refers to automatically processing the region of interest, and selectively ignoring the regions of no interest, when facing a scene; the region of interest is called the subject region. The visible light map is an RGB (Red, Green, Blue) image. A color camera can shoot any scene to obtain a color image, that is, an RGB image. The visible light map may be stored locally on the electronic device, stored on another device, stored on a network, or captured in real time by the electronic device, which is not limited here. Specifically, the ISP processor or central processing unit of the electronic device may obtain the visible light map from local storage, another device, or a network, or obtain the visible light map by shooting the scene with a camera.
Second, a central weight map corresponding to the visible light map is generated, where the weight values represented by the central weight map decrease gradually from the center to the edges.
The central weight map is a map that records the weight value of each pixel point of the visible light map. The weight values recorded in the central weight map decrease gradually from the center toward the four edges; that is, the central weight is the largest and the weights decrease gradually toward the four edges. The central weight map thus characterizes weight values that decrease gradually from the center pixel points of the visible light image to its edge pixel points.
The ISP processor or central processing unit may generate the corresponding central weight map according to the size of the visible light map. The weight values represented by the central weight map decrease gradually from the center to the four edges. The central weight map may be generated using a Gaussian function, a first-order equation, or a second-order equation; the Gaussian function may be a two-dimensional Gaussian function.
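A minimal sketch of generating such a central weight map with a two-dimensional Gaussian function is given below; the standard deviations, chosen as a fraction of the map size, are illustrative assumptions.

```python
import numpy as np

def central_weight_map(height, width, sigma_scale=0.5):
    """Weight map whose values decrease gradually from the center to the
    edges, generated from a two-dimensional Gaussian function."""
    ys = np.arange(height) - (height - 1) / 2.0
    xs = np.arange(width) - (width - 1) / 2.0
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    sigma_y, sigma_x = sigma_scale * height, sigma_scale * width
    weights = np.exp(-(yy ** 2 / (2 * sigma_y ** 2) + xx ** 2 / (2 * sigma_x ** 2)))
    return weights / weights.max()   # largest weight (1.0) at the center

print(central_weight_map(5, 5).round(2))
```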
And thirdly, inputting the visible light image and the central weight image into a main body detection model to obtain a main body region confidence image, wherein the main body detection model is obtained by training in advance according to the visible light image, the depth image, the central weight image and the corresponding marked main body mask image of the same scene.
The subject detection model is obtained by collecting a large amount of training data in advance and inputting the training data into a subject detection model containing initial network weights for training. Each set of training data comprises a visible light map, a center weight map, and a labeled subject mask map corresponding to the same scene. The visible light map and the center weight map are used as the input of the subject detection model to be trained, and the labeled subject mask map is used as the ground truth expected to be output by the trained subject detection model. The subject mask map is an image filter template used to identify the subject in an image; it can shield the other parts of the image and screen out the subject in the image. The subject detection model may be trained to recognize and detect various subjects, such as people, flowers, cats, dogs, backgrounds, and the like.
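The following minimal sketch illustrates, under assumed array shapes, how one set of training data could be assembled and how a pixel-wise loss against the labeled subject mask map could be computed; the binary cross-entropy loss is an assumption made for illustration and is not prescribed by this application.

```python
# A minimal sketch: assemble one training sample and score a predicted confidence map.
import numpy as np

def make_training_sample(rgb, center_weights, subject_mask):
    # Stack the 3-channel RGB map and the 1-channel center weight map as model input.
    x = np.concatenate([rgb, center_weights[..., None]], axis=-1)  # H x W x 4
    y = subject_mask.astype(np.float32)                            # H x W, values in {0, 1}
    return x, y

def pixelwise_bce(pred, target, eps=1e-7):
    # Binary cross-entropy between the predicted confidence map and the labeled mask.
    pred = np.clip(pred, eps, 1 - eps)
    return float(np.mean(-(target * np.log(pred) + (1 - target) * np.log(1 - pred))))
```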
Specifically, the ISP processor or central processor may input the visible light map and the central weight map into the subject detection model and perform detection to obtain a subject region confidence map. The subject region confidence map records the probability that each pixel belongs to each recognizable subject; for example, the probability that a certain pixel point belongs to a person is 0.8, to a flower is 0.1, and to the background is 0.1.
Fourth, the target subject in the visible light map is determined according to the subject region confidence map.
The subject refers to various objects, such as a person, flower, cat, dog, cow, blue sky, white cloud, background, and the like. The target subject refers to the desired subject and can be selected as needed. Specifically, the ISP processor or the central processing unit may select the subject with the highest confidence in the subject region confidence map as the subject in the visible light map; if there is one subject, that subject is used as the target subject; if multiple subjects exist, one or more of them can be selected as target subjects as needed.
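As an illustration, the following sketch shows one way of determining the target subject from a subject region confidence map; the per-class map layout (H x W x num_classes) and the 0.5 confidence threshold are assumptions made for illustration.

```python
# A minimal sketch of picking the target subject from a subject region confidence map.
import numpy as np

def pick_target_subject(confidence, class_names, threshold=0.5):
    # Per-pixel class with the highest confidence, e.g. person 0.8 vs flower 0.1.
    per_pixel_class = confidence.argmax(axis=-1)   # H x W
    per_pixel_conf = confidence.max(axis=-1)       # H x W
    # Score each class by its mean confidence over the pixels assigned to it.
    scores = [per_pixel_conf[per_pixel_class == c].mean()
              if (per_pixel_class == c).any() else 0.0
              for c in range(confidence.shape[-1])]
    target = int(np.argmax(scores))
    region_mask = (per_pixel_class == target) & (per_pixel_conf >= threshold)
    return class_names[target], region_mask
```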
FIG. 13 is a diagram illustrating an image processing effect according to an embodiment. As shown in FIG. 13, a butterfly exists in the RGB map 1302. The RGB map is input into the subject detection model to obtain a subject region confidence map 1304; the subject region confidence map 1304 is then filtered and binarized to obtain a binarized mask map 1306; the binarized mask map 1306 is then subjected to morphological processing and guided filtering to achieve edge enhancement, yielding the subject mask map 1308.
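The post-processing chain of FIG. 13 (filtering, binarization, morphological processing, and guided filtering for edge enhancement) can be sketched with OpenCV as follows. The kernel sizes, threshold, and filter parameters are illustrative only, and cv2.ximgproc.guidedFilter requires the opencv-contrib-python package.

```python
# A minimal sketch of refining a subject region confidence map into a subject mask map.
import cv2
import numpy as np

def refine_subject_mask(confidence_map, guide_rgb):
    # confidence_map: float32 in [0, 1]; guide_rgb: uint8 H x W x 3 visible light map.
    smoothed = cv2.GaussianBlur(confidence_map, (5, 5), 0)
    _, binary = cv2.threshold((smoothed * 255).astype(np.uint8), 127, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)    # remove small noise
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)   # fill small holes
    # Guided filtering with the RGB map as guide enhances edges of the mask.
    refined = cv2.ximgproc.guidedFilter(guide=guide_rgb, src=closed, radius=8, eps=1e-2)
    return refined
```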
In the embodiment of the present application, a visible light map is obtained and a central weight map corresponding to the visible light map is generated, where the weight values represented by the central weight map gradually decrease from the center to the edges. The visible light map and the central weight map are input into a subject detection model to obtain a subject region confidence map, where the subject detection model is obtained by training in advance with visible light maps, depth maps, central weight maps, and the corresponding labeled subject mask maps of the same scenes. The target subject in the visible light map is determined according to the subject region confidence map. In this way, subject detection is performed on the original image and the target subject region is obtained accurately.
In one embodiment, the subject detection on the original image to obtain the target subject region includes:
First, subject detection is performed on the original image to obtain at least two subject regions.
After the original image captured by the electronic device is obtained, subject detection is performed on the original image using a subject detection network model to obtain at least two subject detection results. If the original image contains several subjects, several subject detection results are generated correspondingly after subject detection. A subject detection result may be a detection frame that encloses all the regions of a subject in the original image, for example, a rectangular detection frame enclosing the whole body of a dog; the detection frame may also be another planar figure such as a circle, an ellipse, or a trapezoid. Alternatively, instead of a detection frame, the subject detection result of the dog may be the region occupied by the whole body of the dog in the original image.
Likewise, if another dog exists in the original image, the subject detection result corresponding to that dog may be a rectangular detection frame enclosing its whole body, or a detection frame of another planar figure such as a circle, an ellipse, or a trapezoid; it may also be, instead of a detection frame, the region occupied by the whole body of that dog in the original image. That is, individual subject detection is performed for each subject in the original image to obtain a corresponding subject detection result, so that the target subject can later be screened out by applying a uniform size range.
Second, shooting data of the captured original image and size information of the image display interface of the electronic device are acquired, and the size range of the target subject in the original image is determined according to the shooting data of the original image and the size information of the image display interface of the electronic device.
Since both the shooting data of the captured original image and the size information of the image display interface of the electronic device affect the size range of the target subject in the original image, the size range of the target subject in the original image is determined according to these two pieces of parameter information. The shooting data of the original image may include parameters such as the focal length and the aperture size used when the original image was captured. The size information of the image display interface of the electronic device includes the size of the display screen of the electronic device and the aspect ratio of the display screen (such as 1:1 or 4:3).
For example, when the lens used for capturing the original image is a telephoto lens, the display proportion of the original image is small, and the size range of the target subject in the original image is correspondingly small. When a short-focus lens is used for capturing the original image, the display proportion of the original image is larger, and the size range of the target subject in the original image is correspondingly larger. Note that the original images captured by the same camera at different zoom factors, as presented on the sensor (CCD), are all the same.
When the lens used for capturing the original image is a short-focus lens and the zoom is switched from 1x zoom to 2x zoom, the range of the preview image cropped from the original image is reduced, and the preview image is enlarged to fit the size of the image display interface of the electronic device for display. Therefore, the size range of the target subject in the original image at 2x zoom becomes correspondingly smaller.
When the display screen of the electronic device is smaller, the size range of the target subject in the original image is correspondingly smaller; the size range of the target subject in the original image is also related to the aspect ratio of the display screen of the electronic device.
Third, a subject region conforming to the size range is selected from the plurality of subject regions, and the subject region conforming to the size range is taken as the target subject region.
In other words, after the original image captured by the electronic device is obtained, subject detection is performed on it using the subject detection network model to obtain at least two subject detection results, and the size range of the target subject in the original image is determined according to the shooting data of the original image and the size information of the image display interface of the electronic device. The subject detection result conforming to that size range can then be screened out from the plurality of subject detection results and taken as the target subject.
In the image processing method of this embodiment, the display area of the original image captured by the electronic device is generally larger than the display area of the preview image shown on the image display interface. To display the subject of the original image accurately in the preview image, subject detection is first performed on the original image; second, the size range of the target subject in the original image is determined; finally, the subject detection result whose size conforms to that range is screened out from the at least two subject detection results of the original image and displayed as the target subject of the preview image. Since both the shooting data of the original image and the size information of the image display interface affect the size range of the target subject in the original image, determining the size range from these two pieces of parameter information allows the target subject to be screened out accurately for display in the preview image.
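As an illustration, the screening step can be sketched as follows. The size-range heuristic below, which treats the focal length and the display size as simple scaling factors, is an assumption made purely for illustration; the application itself does not fix a concrete formula.

```python
# A minimal, hedged sketch of determining a size range and screening subject regions.
# Subject regions are given as (x, y, w, h) detection frames in original-image pixels.
def target_size_range(image_area, focal_length_mm, display_w, display_h):
    # Longer (telephoto) focal lengths are assumed to shrink the expected subject size.
    zoom_factor = max(focal_length_mm / 26.0, 1.0)     # 26 mm assumed as the base wide lens
    display_ratio = min(display_w * display_h / (1920 * 1080), 1.0)
    upper = image_area * 0.5 * display_ratio / zoom_factor
    lower = upper * 0.05
    return lower, upper

def screen_target_subject(subject_boxes, size_range):
    lower, upper = size_range
    candidates = [b for b in subject_boxes if lower <= b[2] * b[3] <= upper]
    return candidates[0] if candidates else None       # first region conforming to the range
```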
In one embodiment, the target subject region includes a face region.
Specifically, when a person is photographed, the target subject region is in most cases a face region; that is, the face region needs to be focused accurately so that it has high definition. Compared with the whole frame, the face region occupies a small area, so relatively little phase difference information can be acquired from it. Therefore, when focusing with the electronic device of the above embodiments, the phase difference in the first direction and the phase difference in the second direction, which are perpendicular to each other, can be calculated from the original image of the face region; the target phase difference is then determined from the phase difference in the first direction and the phase difference in the second direction, and the face region is focused according to the target phase difference, yielding high focusing accuracy.
The electronic device comprises an image sensor; the image sensor comprises a plurality of pixel point groups arranged in an array, and each pixel point group comprises a plurality of pixel points arranged in an array. Each pixel point corresponds to one photosensitive unit; each pixel point comprises a plurality of sub-pixel points arranged in an array, and each sub-pixel point corresponds to one photodiode. The shape of the photodiode may be a circle, a square, or a sector, which is not limited in this application.
A conventional method for focusing on a face region can acquire a phase difference in only one direction, so the phase difference information obtainable from an already small face region is even more limited. In the embodiment of the present application, by contrast, phase differences in two directions can be calculated from the luminance map collected by the image sensor of the foregoing embodiments, which doubles the amount of phase difference information and allows more detailed phase difference information to be acquired. Furthermore, the target phase difference is determined from the phase difference in the first direction and the phase difference in the second direction, and the target subject region is focused according to the target phase difference, so that the face region can be focused more accurately.
It should be understood that, although the steps in the above flowcharts are shown in sequence as indicated by the arrows, these steps are not necessarily performed in that sequence. Unless explicitly stated herein, the execution of these steps is not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least some of the steps in the above flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in FIG. 14, there is provided a focus control apparatus 1400, including: a subject detection module 1420, a phase difference calculation module 1440, and a phase difference focusing module 1460. Wherein:
a subject detection module 1420, configured to perform subject detection on the original image to obtain a target subject region;
the phase difference calculating module 1440 is configured to calculate a phase difference in a first direction and a phase difference in a second direction according to the original image of the target subject area, where a preset included angle is formed between the first direction and the second direction;
the phase difference focusing module 1460 is configured to determine a target phase difference from the phase difference in the first direction and the phase difference in the second direction, and focus the target main body region according to the target phase difference.
In one embodiment, the phase difference calculation module 1440 further comprises:
the sub-brightness map acquisition unit is used for acquiring a sub-brightness map corresponding to each pixel group according to the brightness value of the sub-pixel at the same position of each pixel in the pixel group for each pixel group in the original image of the target main body region;
the target brightness graph acquisition unit is used for generating a target brightness graph according to the sub-brightness graph corresponding to each pixel point group;
and the first direction and second direction phase difference unit is used for calculating the phase difference of the first direction and the phase difference of the second direction according to the target brightness map.
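As an illustration, generating the target brightness map from the per-group sub-brightness maps amounts to tiling each group's sub-brightness map at the group's position in the array, as in the following sketch; the group layout and map sizes are assumptions made for illustration.

```python
# A minimal sketch of assembling the target luminance map from per-group sub-luminance maps.
import numpy as np

def target_luminance_map(sub_maps):
    # sub_maps: 2-D list where sub_maps[gy][gx] is the sub-luminance map (e.g. 2x2)
    # of the pixel point group at row gy, column gx of the sensor's group array.
    rows = [np.hstack(row) for row in sub_maps]
    return np.vstack(rows)
```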
In one embodiment, the first-direction and second-direction phase difference unit is further configured to perform segmentation processing on the target brightness map to obtain a first segmented brightness map and a second segmented brightness map, determine the phase difference values of mutually matched pixels according to the position differences of the mutually matched pixels in the first segmented brightness map and the second segmented brightness map, and determine the phase difference value in the first direction and the phase difference value in the second direction according to the phase difference values of the mutually matched pixels.
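The following sketch illustrates this segmentation-and-matching idea: the target luminance map is split into two segmented luminance maps, and the phase difference is taken as the shift that best aligns the mutually matched pixels. Splitting by alternating rows or columns and using a sum-of-absolute-differences matching score are assumptions made for illustration.

```python
# A minimal sketch of estimating the phase difference from two segmented luminance maps.
import numpy as np

def phase_difference(lum, direction, max_shift=8):
    # Segment the target luminance map into two segmented luminance maps.
    if direction == "first":                 # phase difference in the first direction
        a, b = lum[0::2, :], lum[1::2, :]    # alternating rows
        shift_axis = 1                       # matched pixels shift horizontally
    else:                                    # phase difference in the second direction
        a, b = lum[:, 0::2], lum[:, 1::2]    # alternating columns
        shift_axis = 0                       # matched pixels shift vertically
    h, w = min(a.shape[0], b.shape[0]), min(a.shape[1], b.shape[1])
    a, b = a[:h, :w].astype(np.float64), b[:h, :w].astype(np.float64)
    # The phase difference is the shift that best aligns the two maps
    # (np.roll wraps around; a real implementation would handle borders).
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        err = float(np.abs(a - np.roll(b, s, axis=shift_axis)).mean())
        if err < best_err:
            best_err, best_shift = err, s
    return best_shift
```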
In one embodiment, the sub-brightness map acquisition unit is further configured to determine the sub-pixel points at the same position in each pixel point to obtain a plurality of sub-pixel point sets, where the sub-pixel points included in each sub-pixel point set occupy the same position within their respective pixel points; for each sub-pixel point set, acquire a brightness value corresponding to the sub-pixel point set according to the brightness value of each sub-pixel point in the set; and generate the sub-brightness map according to the brightness value corresponding to each sub-pixel point set.
In one embodiment, the sub-brightness map acquisition unit is further configured to determine a color coefficient corresponding to each sub-pixel point in the sub-pixel point set, the color coefficient being determined according to the color channel of the sub-pixel point; multiply the color coefficient of each sub-pixel point in the sub-pixel point set by the brightness value of that sub-pixel point to obtain the weighted brightness of each sub-pixel point in the set; and add the weighted brightness values of the sub-pixel points in the set to obtain the brightness value corresponding to the sub-pixel point set.
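As an illustration, the weighted-sum computation performed by the sub-brightness map acquisition unit can be sketched as follows. The Rec. 601 luma coefficients are an assumption, since the application only states that the color coefficient is determined by the sub-pixel point's color channel; the 2x2 group and sub-pixel layout is likewise assumed for the example.

```python
# A minimal sketch of building one sub-luminance map for a single pixel point group.
import numpy as np

COLOR_COEFF = {"R": 0.299, "G": 0.587, "B": 0.114}   # assumed per-channel coefficients

def sub_luminance_map(group, channels):
    # group[i][j]: 2x2 array of sub-pixel luminance values of the pixel point at (i, j);
    # channels[i][j]: color channel ("R", "G" or "B") of that pixel point.
    sub_h, sub_w = group[0][0].shape
    sub_map = np.zeros((sub_h, sub_w))
    for r in range(sub_h):
        for c in range(sub_w):
            # Sub-pixel point set: the sub-pixels at position (r, c) of every pixel point.
            total = 0.0
            for i in range(len(group)):
                for j in range(len(group[0])):
                    weighted = COLOR_COEFF[channels[i][j]] * group[i][j][r, c]
                    total += weighted                 # sum of the weighted luminances
            sub_map[r, c] = total
    return sub_map
```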
In one embodiment, the phase difference focusing module 1460 includes: a confidence coefficient acquisition unit, a defocusing distance determination unit, and a focusing unit. Wherein:
a confidence coefficient acquisition unit configured to acquire a first confidence coefficient of the phase difference in the first direction and a second confidence coefficient of the phase difference in the second direction;
a defocusing distance determination unit, configured to select, as the target phase difference, the phase difference corresponding to the larger of the first confidence coefficient and the second confidence coefficient, and to determine the corresponding defocusing distance from the correspondence between phase difference and defocusing distance according to the target phase difference;
and the focusing unit is used for controlling the lens to move according to the defocusing distance so as to focus.
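Taken together, the three units can be sketched as the following focusing flow; the linear phase-difference-to-defocus mapping and the lens driver callback are placeholders assumed for illustration, not a real camera interface.

```python
# A minimal sketch of the focusing flow of the phase difference focusing module.
def focus(pd_first, conf_first, pd_second, conf_second, pd_to_defocus_slope, move_lens):
    # Target phase difference: the one whose confidence is larger.
    target_pd = pd_first if conf_first >= conf_second else pd_second
    # Correspondence between phase difference and defocus distance (assumed linear here;
    # in practice it would come from per-module calibration data).
    defocus = target_pd * pd_to_defocus_slope
    # The sign of the defocus distance encodes the movement direction of the lens.
    move_lens(defocus)
    return defocus
```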
In one embodiment, the subject detection module 1420 is further configured to obtain a visible light map of the original image; generating a central weight map corresponding to a visible light map of the original image; inputting the visible light image and the central weight image into a main body detection model to obtain a main body region confidence image; and determining a target subject region in the original image according to the subject region confidence map.
In one embodiment, the subject detection module 1420 includes:
a subject detection unit, configured to perform subject detection on the original image captured by the electronic device to obtain at least two subject regions;
a target-region size range determination unit, configured to acquire shooting data of the captured original image and size information of the image display interface of the electronic device, and to determine the size range of the target region in the original image according to the shooting data of the original image and the size information of the image display interface of the electronic device;
and a target subject region selection unit configured to select a subject region corresponding to the size range from the plurality of subject regions, and to set the subject region corresponding to the size range as a target subject region.
In one embodiment, the target subject region includes a face region.
The division of the modules in the focusing control device is only used for illustration, and in other embodiments, the focusing control device may be divided into different modules as needed to complete all or part of the functions of the focusing control device.
Fig. 15 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in fig. 15, the electronic device includes a processor and a memory connected by a system bus. The processor provides computing and control capability and supports the operation of the whole electronic device. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the focus control method provided in the foregoing embodiments. The internal memory provides a cached execution environment for the operating system and the computer program in the non-volatile storage medium. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
Each module in the focus control apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on a terminal or a server. The program modules constituted by the computer program may be stored in the memory of the terminal or the server. When the computer program is executed by a processor, the steps of the methods described in the embodiments of the present application are carried out.
The process of the electronic device implementing the focus control method is as described in the above embodiments, and is not described herein again.
The embodiments of the present application also provide a computer-readable storage medium: one or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the focus control method.
A computer program product comprising instructions which, when run on a computer, cause the computer to perform a focus control method.
Any reference to memory, storage, a database, or another medium used in the embodiments of the present application may include non-volatile and/or volatile memory. Suitable non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments only express several implementations of the present application, and their descriptions are specific and detailed, but they should not be construed as limiting the scope of the present application. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (12)

1. A focusing control method, applied to an electronic device, characterized by comprising the following steps:
performing main body detection on the original image to obtain a target main body area;
calculating a phase difference in a first direction and a phase difference in a second direction according to the original image of the target main body area, wherein a preset included angle is formed between the first direction and the second direction;
and determining a target phase difference from the phase difference in the first direction and the phase difference in the second direction, and focusing the target main body area according to the target phase difference.
2. The method of claim 1, wherein the electronic device comprises an image sensor comprising a plurality of pixel groups arranged in an array, each pixel group comprising M x N pixels arranged in an array; each pixel point corresponds to one photosensitive unit; each pixel point comprises a plurality of sub pixel points arranged in an array, wherein M and N are both natural numbers which are more than or equal to 2;
the calculating a phase difference in a first direction and a phase difference in a second direction from the original image of the target subject region includes:
for each pixel point group in the original image of the target main body region, acquiring a sub-brightness map corresponding to the pixel point group according to the brightness value of a sub-pixel point at the same position of each pixel point in the pixel point group;
generating a target brightness map according to the sub-brightness map corresponding to each pixel point group;
and calculating the phase difference in the first direction and the phase difference in the second direction according to the target brightness map.
3. The method of claim 2, wherein said calculating a phase difference in a first direction and a phase difference in a second direction from said target luminance map comprises:
performing segmentation processing on the target brightness image to obtain a first segmentation brightness image and a second segmentation brightness image, and determining the phase difference value of the mutually matched pixels according to the position difference of the mutually matched pixels in the first segmentation brightness image and the second segmentation brightness image;
and determining the phase difference value in the first direction and the phase difference value in the second direction according to the phase difference values of the mutually matched pixels.
4. The method according to claim 2, wherein the obtaining the sub-brightness map corresponding to the pixel point group according to the brightness value of the sub-pixel point at the same position of each pixel point in the pixel point group comprises:
determining the sub-pixel points at the same position in each pixel point to obtain a plurality of sub-pixel point sets, wherein the sub-pixel points included in each sub-pixel point set occupy the same position within their respective pixel points;
for each sub-pixel point set, acquiring a brightness value corresponding to the sub-pixel point set according to the brightness value of each sub-pixel point in the sub-pixel point set;
and generating the sub-brightness map according to the brightness value corresponding to each sub-pixel point set.
5. The method according to claim 4, wherein the obtaining the brightness value corresponding to the set of sub-pixels according to the brightness value of each sub-pixel in the set of sub-pixels comprises:
determining a color coefficient corresponding to each sub-pixel point in the sub-pixel point set, wherein the color coefficient is determined according to a color channel corresponding to the sub-pixel point;
multiplying the color coefficient corresponding to each sub-pixel point in the sub-pixel point set by the brightness value of that sub-pixel point to obtain the weighted brightness of each sub-pixel point in the sub-pixel point set;
and adding the weighted brightness of each sub-pixel point in the sub-pixel point set to obtain a brightness value corresponding to the sub-pixel point set.
6. The method of claim 1, wherein determining a target phase difference from the phase difference in the first direction and the phase difference in the second direction, and focusing the target subject area according to the target phase difference comprises:
acquiring a first confidence coefficient of the phase difference in the first direction and a second confidence coefficient of the phase difference in the second direction;
selecting, as the target phase difference, the phase difference corresponding to the larger of the first confidence coefficient and the second confidence coefficient, and determining a corresponding defocusing distance from the correspondence between phase difference and defocusing distance according to the target phase difference;
and controlling the lens to move according to the defocusing distance so as to focus.
7. The method according to claim 1, wherein the performing subject detection on the original image to obtain a target subject region comprises:
acquiring a visible light map of the original image;
generating a central weight map corresponding to a visible light map of the original image;
inputting the visible light map and the central weight map into a subject detection model to obtain a subject region confidence map;
and determining a target subject region in the original image according to the subject region confidence map.
8. The method according to claim 1, wherein the performing subject detection on the original image to obtain a target subject region comprises:
performing main body detection on an original image to obtain at least two main body areas;
acquiring shooting data for shooting the original image and size information of an image display interface of the electronic equipment, and determining the size range of a target area in the original image according to the shooting data of the original image and the size information of the image display interface of the electronic equipment;
and screening out the main body regions conforming to the size range from the plurality of main body regions, and taking the main body regions conforming to the size range as target main body regions.
9. The method of claim 1, wherein the target subject region comprises a face region.
10. A focus control apparatus, characterized in that the apparatus comprises:
the main body detection module is used for carrying out main body detection on the original image to obtain a target main body area;
the phase difference calculation module is used for calculating a phase difference in a first direction and a phase difference in a second direction according to the original image of the target main body area, and the first direction and the second direction form a preset included angle;
and the phase difference focusing module is used for determining a target phase difference from the phase difference in the first direction and the phase difference in the second direction and focusing the target main body area according to the target phase difference.
11. An electronic device comprising a memory and a processor, the memory having a computer program stored thereon, wherein the computer program, when executed by the processor, causes the processor to perform the steps of the focus control method according to any one of claims 1 to 9.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 9.
CN201911101407.2A 2019-11-12 2019-11-12 Focusing control method and device, electronic equipment and computer readable storage medium Active CN112866545B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911101407.2A CN112866545B (en) 2019-11-12 2019-11-12 Focusing control method and device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911101407.2A CN112866545B (en) 2019-11-12 2019-11-12 Focusing control method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112866545A true CN112866545A (en) 2021-05-28
CN112866545B CN112866545B (en) 2022-11-15

Family

ID=75984615

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911101407.2A Active CN112866545B (en) 2019-11-12 2019-11-12 Focusing control method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112866545B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102164293A (en) * 2010-02-16 2011-08-24 索尼公司 Image processing device, image processing method, image processing program, and imaging device
CN103444184A (en) * 2011-03-24 2013-12-11 富士胶片株式会社 Color image sensor, imaging device, and control program for imaging device
CN103493484A (en) * 2011-03-31 2014-01-01 富士胶片株式会社 Image capturing device and image capturing method
CN106973206A (en) * 2017-04-28 2017-07-21 广东欧珀移动通信有限公司 Camera module image pickup processing method, device and terminal device
US20170359516A1 (en) * 2014-12-17 2017-12-14 Lg Innotek Co., Ltd. Image Acquiring Device and Portable Terminal Comprising Same and Image Acquiring Method of the Device
JP2018141908A (en) * 2017-02-28 2018-09-13 キヤノン株式会社 Focus detection device, focus control device, imaging device, focus detection method, and focus detection program
CN109905600A (en) * 2019-03-21 2019-06-18 上海创功通讯技术有限公司 Imaging method, imaging device and computer readable storage medium
CN110278376A (en) * 2019-07-03 2019-09-24 Oppo广东移动通信有限公司 Focusing method, complementary metal oxide image sensor, terminal and storage medium
CN110392211A (en) * 2019-07-22 2019-10-29 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113259596A (en) * 2021-07-14 2021-08-13 北京小米移动软件有限公司 Image generation method, phase detection focusing method and device
CN113259596B (en) * 2021-07-14 2021-10-08 北京小米移动软件有限公司 Image generation method, phase detection focusing method and device

Also Published As

Publication number Publication date
CN112866545B (en) 2022-11-15

Similar Documents

Publication Publication Date Title
CN110248096B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN112866542B (en) Focus tracking method and apparatus, electronic device, and computer-readable storage medium
CN112866549B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN110191287B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN110248101B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN110881103B (en) Focusing control method and device, electronic equipment and computer readable storage medium
KR20150107571A (en) Image pickup apparatus and image pickup method of generating image having depth information
CN107133982A (en) Depth map construction method, device and capture apparatus, terminal device
US20190052793A1 (en) Apparatus for generating a synthetic 2d image with an enhanced depth of field of an object
CN110490196A (en) Subject detection method and apparatus, electronic equipment, computer readable storage medium
JP6234401B2 (en) Image processing apparatus, imaging apparatus, image processing method, and program
CN112866510B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN112866511B (en) Imaging assembly, focusing method and device and electronic equipment
CN112866675B (en) Depth map generation method and device, electronic equipment and computer-readable storage medium
CN112866655B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN112866545B (en) Focusing control method and device, electronic equipment and computer readable storage medium
CN110689007B (en) Subject recognition method and device, electronic equipment and computer-readable storage medium
CN112866548B (en) Phase difference acquisition method and device and electronic equipment
CN112866547B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN112866546B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN112866543B (en) Focusing control method and device, electronic equipment and computer readable storage medium
CN112862880A (en) Depth information acquisition method and device, electronic equipment and storage medium
CN112866544B (en) Phase difference acquisition method, device, equipment and storage medium
CN112866551B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN112866552B (en) Focusing method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant