CN112866547A - Focusing method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN112866547A
CN112866547A (application CN201911101418.0A)
Authority
CN
China
Prior art keywords
phase difference
sub-region
target
value
Prior art date
Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Application number
CN201911101418.0A
Other languages
Chinese (zh)
Other versions
CN112866547B
Inventor
贾玉虎
Current Assignee (the listed assignee may be inaccurate)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (the priority date is an assumption and is not a legal conclusion)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911101418.0A
Publication of CN112866547A
Application granted; publication of CN112866547B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/67: Focus control based on electronic image sensor signals
    • H04N23/672: Focus control based on electronic image sensor signals based on the phase difference signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Focusing (AREA)

Abstract

The application relates to a focusing method and apparatus, an electronic device, and a computer-readable storage medium. The method includes the following steps: acquiring a region of interest selected in a first preview image; dividing the region of interest into at least two sub-regions; acquiring a first phase difference and a second phase difference corresponding to each of the at least two sub-regions, where the first direction corresponding to the first phase difference and the second direction corresponding to the second phase difference form a preset included angle; determining, according to the first phase difference and the second phase difference corresponding to each sub-region, the target sub-region corresponding to the object closest to the electronic device among the at least two sub-regions; and focusing according to the target sub-region. The method can improve focusing accuracy.

Description

Focusing method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of image processing, and in particular, to a focusing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
During focusing, when a user wants to focus on a particular object, the user can manually tap the screen position where that object is located. The algorithm then obtains the screen coordinates (x, y) of the tap and sets a region of interest centered on those coordinates, with widths dx and dy in the horizontal and vertical directions respectively. When the user focuses manually in this way, the resulting region of interest can contain too much background; the background information may then strongly influence the focusing result, and the focus may land on the background instead of the small object the user wants to focus on, resulting in inaccurate focusing.
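For illustration only, the conventional tap-to-focus construction described above might look like the following Python sketch; the function name and the clamping to the screen bounds are assumptions, not taken from the patent.

    # Illustrative sketch: build an ROI centered on tapped coordinates (x, y)
    # with widths dx and dy, clamped to the screen (clamping is an assumption).
    def roi_from_tap(x, y, dx, dy, screen_w, screen_h):
        left = max(0.0, x - dx / 2)
        top = max(0.0, y - dy / 2)
        right = min(float(screen_w), x + dx / 2)
        bottom = min(float(screen_h), y + dy / 2)
        return left, top, right - left, bottom - top  # (x, y, width, height)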
Disclosure of Invention
The embodiment of the application provides a focusing method, a focusing device, electronic equipment and a computer readable storage medium, which can improve the focusing accuracy.
A focusing method is applied to an electronic device. The electronic device includes an image sensor, the image sensor includes a plurality of pixel point groups arranged in an array, and each pixel point group includes M × N pixel points arranged in an array; each pixel point corresponds to a photosensitive unit, where M and N are both natural numbers greater than or equal to 2. The method includes the following steps:
acquiring a region of interest selected in the first preview image;
dividing the region of interest into at least two sub-regions;
acquiring a first phase difference and a second phase difference corresponding to each sub-area in at least two sub-areas, wherein a first direction corresponding to the first phase difference and a second direction corresponding to the second phase difference form a preset included angle;
determining a target sub-area corresponding to an object closest to the electronic equipment in the at least two sub-areas according to the first phase difference and the second phase difference corresponding to each sub-area;
and focusing according to the target sub-region.
A focusing device is applied to an electronic device. The electronic device includes an image sensor, the image sensor includes a plurality of pixel point groups arranged in an array, and each pixel point group includes M × N pixel points arranged in an array; each pixel point corresponds to a photosensitive unit, where M and N are both natural numbers greater than or equal to 2. The device includes:
the first acquisition module is used for acquiring the region of interest selected in the first preview image;
the dividing module is used for dividing the region of interest into at least two sub-regions;
the second obtaining module is used for obtaining a first phase difference and a second phase difference corresponding to each sub-area in the at least two sub-areas, wherein a first direction corresponding to the first phase difference and a second direction corresponding to the second phase difference form a preset included angle;
the determining module is used for determining a target sub-area corresponding to an object closest to the camera in at least two sub-areas according to the first phase difference and the second phase difference corresponding to each sub-area;
and the focusing module is used for focusing according to the target sub-area.
An electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of:
acquiring a region of interest selected in the first preview image;
dividing the region of interest into at least two sub-regions;
acquiring a first phase difference and a second phase difference corresponding to each sub-area in at least two sub-areas, wherein a first direction corresponding to the first phase difference and a second direction corresponding to the second phase difference form a preset included angle;
determining a target sub-area corresponding to an object closest to the electronic equipment in the at least two sub-areas according to the first phase difference and the second phase difference corresponding to each sub-area;
and focusing according to the target sub-region.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a region of interest selected in the first preview image;
dividing the region of interest into at least two sub-regions;
acquiring a first phase difference and a second phase difference corresponding to each sub-area in at least two sub-areas, wherein a first direction corresponding to the first phase difference and a second direction corresponding to the second phase difference form a preset included angle;
determining a target sub-area corresponding to an object closest to the electronic equipment in the at least two sub-areas according to the first phase difference and the second phase difference corresponding to each sub-area;
and focusing according to the target sub-region.
According to the focusing method and apparatus, the electronic device, and the computer-readable storage medium, the region of interest selected in the first preview image is acquired and divided into at least two sub-regions, and a first phase difference and a second phase difference corresponding to each of the at least two sub-regions are acquired, where the first direction corresponding to the first phase difference and the second direction corresponding to the second phase difference form a preset included angle. The target sub-region corresponding to the object closest to the electronic device among the at least two sub-regions is determined according to the first phase difference and the second phase difference corresponding to each sub-region; using two phase differences improves the accuracy of the obtained phase difference, and therefore the accuracy of the determined target sub-region. Focusing is then performed according to the target sub-region, that is, according to the closest object, so even a small object can be focused on: focusing accuracy is improved, and the target photographed object is clearer.
Drawings
In order to explain the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of an image processing circuit in one embodiment;
FIG. 2 is a flow chart of a focusing method in one embodiment;
FIG. 3 is a schematic diagram of phase focusing in one embodiment;
FIG. 4 is a schematic diagram of phase detection pixel points arranged in pairs among the pixel points of an image sensor;
FIG. 5 is a schematic diagram of a portion of an electronic device in one embodiment;
FIG. 6 is a schematic diagram of a portion of an image sensor 504 in one embodiment;
FIG. 7 is a schematic diagram of the structure of a pixel point in one embodiment;
FIG. 8 is a schematic diagram of the internal structure of an image sensor in one embodiment;
FIG. 9 is a schematic diagram of a pixel point group Z in one embodiment;
FIG. 10 is a schematic diagram of a process for obtaining a first phase difference and a second phase difference in one embodiment;
FIG. 11 is a schematic diagram of slicing a target luminance map in a first direction in one embodiment;
FIG. 12 is a schematic diagram of slicing a target luminance map in a second direction in one embodiment;
FIG. 13 is a block diagram of the structure of a focusing device in one embodiment;
FIG. 14 is a schematic diagram of the internal structure of an electronic device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The embodiments of the present application provide an electronic device. The electronic device includes an image processing circuit, which may be implemented using hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. FIG. 1 is a schematic diagram of an image processing circuit in one embodiment. As shown in FIG. 1, for convenience of explanation, only the aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in FIG. 1, the image processing circuit includes an ISP processor 140 and control logic 150. Image data captured by the imaging device 110 is first processed by the ISP processor 140, which analyzes the image data to capture image statistics that may be used to determine and/or control one or more parameters of the imaging device 110. The imaging device 110 may include a camera having one or more lenses 112 and an image sensor 114. The image sensor 114 may include a color filter array (e.g., a Bayer filter); it may acquire the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by the ISP processor 140. The attitude sensor 120 (e.g., a three-axis gyroscope, Hall sensor, or accelerometer) may provide image-processing parameters (e.g., anti-shake parameters) to the ISP processor 140 based on the interface type of the attitude sensor 120. The attitude sensor 120 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above.
In addition, the image sensor 114 may also send raw image data to the attitude sensor 120; the attitude sensor 120 may provide the raw image data to the ISP processor 140 based on its interface type, or may store the raw image data in the image memory 130.
The ISP processor 140 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 140 may perform one or more image processing operations on the raw image data, gathering statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
The ISP processor 140 may also receive image data from the image memory 130. For example, the attitude sensor 120 interface sends raw image data to the image memory 130, and the raw image data in the image memory 130 is then provided to the ISP processor 140 for processing. The image memory 130 may be part of a memory device, a storage device, or a separate dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the image sensor 114 interface or from the attitude sensor 120 interface or from the image memory 130, the ISP processor 140 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to image memory 130 for additional processing before being displayed. ISP processor 140 receives processed data from image memory 130 and performs image data processing on the processed data in the raw domain and in the RGB and YCbCr color spaces. The image data processed by ISP processor 140 may be output to display 160 for viewing by a user and/or further processed by a Graphics Processing Unit (GPU). Further, the output of the ISP processor 140 may also be sent to the image memory 130, and the display 160 may read image data from the image memory 130. In one embodiment, image memory 130 may be configured to implement one or more frame buffers.
The statistical data determined by the ISP processor 140 may be transmitted to the control logic 150 unit. For example, the statistical data may include image sensor 114 statistics such as gyroscope vibration frequency, auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 112 shading correction, and the like. The control logic 150 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters of the imaging device 110 and control parameters of the ISP processor 140 based on the received statistical data. For example, the control parameters of the imaging device 110 may include attitude sensor 120 control parameters (e.g., gain, integration time of exposure control, anti-shake parameters, etc.), camera flash control parameters, camera anti-shake displacement parameters, lens 112 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 112 shading correction parameters.
In one embodiment, the image sensor 114 in the imaging device (camera) may include a plurality of pixel groups arranged in an array, wherein each pixel group includes a plurality of pixels arranged in an array, and each pixel includes a plurality of sub-pixels arranged in an array.
A first image is acquired through the lens 112 and the image sensor 114 of the imaging device (camera) 110 and sent to the ISP processor 140. After receiving the first image, the ISP processor 140 may perform subject detection on it to obtain the region of interest in the first image; alternatively, the region of interest may be obtained as the region selected by the user, or in other manners, without being limited thereto.
An ISP processor 140 for acquiring a region of interest selected in the first preview image; dividing the region of interest into at least two sub-regions; acquiring a first phase difference and a second phase difference corresponding to each sub-area in at least two sub-areas, wherein a first direction corresponding to the first phase difference and a second direction corresponding to the second phase difference form a preset included angle; determining a target sub-area corresponding to an object closest to the electronic equipment in the at least two sub-areas according to the first phase difference and the second phase difference corresponding to each sub-area; focusing is performed according to the target sub-area, and related information of the target sub-area, such as position information, contour information, etc., may be sent to the control logic 150.
After receiving the information about the target sub-region, the control logic 150 controls the lens 112 in the imaging device (camera) to move, so as to focus on the position in the actual scene corresponding to the target sub-region.
FIG. 2 is a flowchart of a focusing method in one embodiment. As shown in fig. 2, a focusing method is applied to an electronic device, and the focusing method includes steps 202 to 210.
Step 202, a region of interest selected in the first preview image is acquired.
The number of cameras in the electronic device is not limited. For example, there may be one camera or two, without being limited thereto. The form in which the camera is provided is likewise not limited: it may be built into the electronic device or externally attached, and it may be a front camera or a rear camera. The camera may be of any type, for example a color camera, a black-and-white camera, a depth camera, a telephoto camera, or a wide-angle camera, without being limited thereto.
The preview image may be a visible light image. A preview image is the image presented on the screen of the electronic device before the camera actually captures a shot. The first preview image may be the preview image of the current frame. A region of interest (ROI) is a region outlined on the processed image, in image processing, with a box, circle, ellipse, irregular polygon, or the like. The region of interest may contain background as well as objects.
Specifically, the electronic device acquires the first preview image and displays it on the display screen. The electronic device receives a trigger instruction on the first preview image and acquires the region of interest selected by the user according to the trigger instruction. The trigger instruction may be generated, for example, when the user taps the first preview image, without being limited thereto.
Step 204, the region of interest is divided into at least two sub-regions.
A sub-region is a region within the region of interest. The sub-regions obtained by dividing the region of interest may all have the same size and shape, may all differ, or may partly match and partly differ. The specific division method is not limited.
In particular, the electronic device divides the region of interest into at least two sub-regions. The electronic device may divide the region of interest selected by the user into N × N sub-regions. Alternatively, the electronic device may divide the region of interest selected by the user into N × M sub-regions, and the like, without being limited thereto. N and M are both positive integers.
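As a purely illustrative sketch of such a division (assuming rectangular sub-regions of equal size, which the text does not require):

    # Split an ROI rectangle (x, y, width, height) into rows x cols sub-regions.
    def divide_roi(x, y, w, h, rows=4, cols=4):
        sw, sh = w / cols, h / rows
        return [(x + c * sw, y + r * sh, sw, sh)
                for r in range(rows) for c in range(cols)]

    subs = divide_roi(100, 100, 200, 200)  # a 4 x 4 division into 16 sub-regions
    assert len(subs) == 16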
Step 206, a first phase difference and a second phase difference corresponding to each of at least two sub-regions are obtained, wherein a first direction corresponding to the first phase difference and a second direction corresponding to the second phase difference form a preset included angle.
The phase difference refers to the difference in the positions of the images formed on the image sensor by imaging light entering the lens from different directions. The first direction and the second direction form a preset included angle, which may be any angle other than 0, 180, or 360 degrees. Taking a 90-degree included angle as an example, the first direction corresponding to the first phase difference may be the vertical direction of the image and the second direction corresponding to the second phase difference the horizontal direction, or the first direction may be the horizontal direction and the second direction the vertical direction; in either case the two directions are perpendicular to each other. Alternatively, the first direction may be the 45-degree direction and the second direction the 135-degree direction, and so on, without being limited thereto.
Specifically, the electronic device acquires the first phase difference and the second phase difference corresponding to each of the at least two sub-regions. For example, if the region of interest is divided into 4 × 4 sub-regions, namely a first sub-region, a second sub-region, …, and a sixteenth sub-region, then the first sub-region has a corresponding first phase difference and second phase difference, as does the second sub-region, and so on through the sixteenth sub-region.
Step 208, determining a target sub-region corresponding to the object closest to the electronic device in the at least two sub-regions according to the first phase difference and the second phase difference corresponding to each sub-region.
The target sub-area refers to an area for focusing.
Specifically, the object closest to the electronic device is generally the object the user wants to focus on. The electronic device determines a target phase difference according to the first phase difference and the second phase difference corresponding to each sub-region, and determines a defocus value according to the target phase difference.
For example, the defocus value can be calculated according to the following formula:

Defocus = PD × DCC

where Defocus is the defocus value, PD is the target phase difference, and DCC is the conversion coefficient.
The electronic device then determines, according to the defocus values, the target sub-region closest to the electronic device among the at least two sub-regions. For example, if the region of interest is divided into 16 sub-regions, namely a first sub-region, a second sub-region, …, and a sixteenth sub-region, the target sub-region determined according to the first phase difference and the second phase difference may be the seventh sub-region.
In this embodiment, the electronic device may directly use either one of the first phase difference and the second phase difference of each sub-region as the target phase difference of that sub-region, and determine, according to the target phase difference of each sub-region, the target sub-region corresponding to the object closest to the electronic device among the at least two sub-regions.
In this embodiment, the electronic device may use the smaller of the first phase difference and the second phase difference corresponding to a sub-region as the target phase difference of that sub-region.
In this embodiment, the electronic device may average the first phase difference and the second phase difference of each sub-region, use the average as the target phase difference of that sub-region, and determine the target sub-region according to the target phase difference of each sub-region.
In this embodiment, the electronic device may assign different weights to the first phase difference and the second phase difference, determine the target phase difference of each sub-region from the first phase difference and its weight together with the second phase difference and its weight, and determine the target sub-region according to the target phase difference of each sub-region.
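The four alternatives above might be combined into one helper, as in the following illustrative sketch; the strategy names and default weights are assumptions, not patent terminology.

    # Illustrative: combine a sub-region's two phase differences into one
    # target phase difference using one of the strategies described above.
    def target_phase_difference(pd1, pd2, strategy="min", w1=0.5, w2=0.5):
        if strategy == "either":    # use either phase difference directly
            return pd1
        if strategy == "min":       # the smaller of the two values
            return min(pd1, pd2)
        if strategy == "mean":      # the average of the two
            return (pd1 + pd2) / 2
        if strategy == "weighted":  # weighted combination
            return w1 * pd1 + w2 * pd2
        raise ValueError("unknown strategy: " + strategy)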
Step 210, focusing according to the target sub-region.
Focusing refers to the process of making the photographed object image clearly by adjusting, through the focusing mechanism of the electronic device, the object distance and the image distance. Focusing here may refer to auto focus, and in particular to Phase Detection Auto Focus (PDAF). In phase detection auto focus, a phase difference is obtained through the sensor, a defocus value is calculated from the phase difference, the lens is controlled to move according to the defocus value, and the Focus Value (FV) peak is then searched for.
Specifically, the electronic device may control the camera to focus according to the target sub-region; that is, the electronic device adjusts the lens according to the focusing area to perform auto focus.
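For illustration, a much-simplified PDAF control loop along the lines described above might look like the following sketch, assuming hypothetical move_lens and measure_fv primitives supplied by the camera driver; neither name comes from the patent.

    # Illustrative PDAF sketch: coarse move by the defocus value, then a
    # simple hill-climb for the focus value (FV) peak.
    def phase_detection_autofocus(pd, dcc, move_lens, measure_fv):
        move_lens(pd * dcc)          # Defocus = PD x DCC, coarse step
        best_fv = measure_fv()
        while True:                  # fine search for the FV peak
            move_lens(1)
            fv = measure_fv()
            if fv <= best_fv:
                move_lens(-1)        # step back; previous position was the peak
                break
            best_fv = fv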
The focusing method in this embodiment acquires the region of interest selected in the first preview image, divides it into at least two sub-regions, and acquires the first phase difference and the second phase difference corresponding to each of the at least two sub-regions, where the first direction corresponding to the first phase difference and the second direction corresponding to the second phase difference form a preset included angle. The target sub-region corresponding to the object closest to the electronic device is determined according to the first phase difference and the second phase difference corresponding to each sub-region. Compared with the conventional technology, in which there is only one phase difference in the horizontal direction, using two phase differences improves the accuracy of the obtained phase difference, and therefore the accuracy of the determined target sub-region. Focusing is performed according to the target sub-region, that is, according to the closest object, so even a small object can be focused on; focusing accuracy is improved, and the target photographed object is clearer.
In one embodiment, FIG. 3 is a schematic diagram of phase focusing. M1 is the position of the image sensor when the electronic device is in the in-focus state, where the in-focus state means focusing has succeeded. Referring to FIG. 3, when the image sensor is at position M1, the imaging light rays g reflected by the object W toward the lens from different directions converge on the image sensor; that is, they form an image at the same position on the image sensor, and the image is clear.
M2 and M3 are positions where the image sensor may be located when the electronic device is not in the in-focus state. As shown in FIG. 3, when the image sensor is at position M2 or M3, the imaging light rays g reflected by the object W toward the lens from different directions are imaged at different positions: at position M2 they are imaged at position A and position B respectively, and at position M3 they are imaged at position C and position D respectively. In these cases the image is not clear.
With the PDAF technique, the difference in the positions of the images formed on the image sensor by imaging light entering the lens from different directions can be obtained, for example, the difference between positions A and B, or between positions C and D, as shown in FIG. 3. From this difference and the geometric relationship between the lens and the image sensor in the camera, a defocus value can be obtained, where the defocus value is the distance between the current position of the image sensor and the position it should be in when in focus. The electronic device can then focus according to the obtained defocus value.
Here, the difference in the positions of the images formed on the image sensor by imaging light entering the lens from different directions is generally called the phase difference. As can be seen from the above description, obtaining the phase difference is a critical step in the PDAF technique.
It should be noted that in practical applications the phase difference can be applied to many different scenarios, of which focusing is only one. For example, the phase difference may be used to acquire a depth map, or to reconstruct a three-dimensional image. The embodiments of the present application provide a method for acquiring the phase difference; the scenario to which the phase difference is applied after it is acquired is not specifically limited.
In the related art, phase detection pixel points may be arranged in pairs among the pixel points of the image sensor. FIG. 4 is a schematic diagram of such an arrangement. As shown in FIG. 4, a phase detection pixel point pair (hereinafter, pixel point pair) A, a pixel point pair B, and a pixel point pair C may be provided in the image sensor. In each pixel point pair, one phase detection pixel point is shielded on its left side and the other is shielded on its right side.
For a phase detection pixel point shielded on the left side, only the right-hand portion of the imaging light beam directed at it can form an image on its photosensitive part (the unshielded part); for a phase detection pixel point shielded on the right side, only the left-hand portion of the beam can. The imaging beam is thus divided into a left part and a right part, and the phase difference can be obtained by comparing the images formed by these two parts.
However, when an object having only horizontal textures is photographed, for example a horizontal line, the images formed by the left and right parts of the imaging beam are still the same, so an accurate phase difference cannot be obtained and focusing cannot be performed accurately.
Therefore, an embodiment of the present application provides a focusing method capable of acquiring a first phase difference and a second phase difference corresponding to each of at least two sub-regions, where the first direction corresponding to the first phase difference and the second direction corresponding to the second phase difference form a preset included angle. For example, when the first direction is the horizontal direction and the second direction is the vertical direction, the second phase difference can still be used for an object with only horizontal textures, so a more accurate phase difference can be determined.
In one embodiment, determining a target sub-region corresponding to an object closest to the electronic device in the at least two sub-regions according to the first phase difference and the second phase difference corresponding to each sub-region includes:
processing the first phase difference corresponding to each sub-region to obtain a first defocus value, and processing the second phase difference corresponding to each sub-region to obtain a second defocus value;
and determining, according to the first defocus value and the second defocus value corresponding to each sub-region, the target sub-region corresponding to the object closest to the camera among the at least two sub-regions.
The defocus value corresponding to a sub-region refers to the distance between the position where the sub-region is imaged and the position of the focus in the in-focus state. The larger the defocus value, the farther the imaged position of the sub-region is from the focus; the smaller the defocus value, the closer it is. When the defocus value is 0, the image sensor is focused on the sub-region, and the sub-region is at the in-focus position.
Each phase difference has a corresponding defocus value; the defocus values corresponding to different phase differences may be the same or different. The relationship between phase difference and defocus value can be obtained by calibration in advance, and may be linear or nonlinear. The processing order of the first and second phase differences is not limited: the first phase difference may be processed before the second, the second before the first, or the first and second phase differences of the first sub-region may be processed before those of the second sub-region. Within the same sub-region, the value of the first phase difference may be the same as or different from that of the second phase difference.
Specifically, the electronic device processes the first phase difference corresponding to each sub-region according to the relationship between the first phase difference and the first defocus value to obtain the first defocus value, and processes the second phase difference corresponding to each sub-region according to the relationship between the second phase difference and the second defocus value to obtain the second defocus value. The electronic device obtains, according to the first and second defocus values corresponding to each sub-region, the target defocus value corresponding to the object closest to the camera, and determines accordingly the target sub-region corresponding to that object among the at least two sub-regions.
In the focusing method of this embodiment, the first phase difference corresponding to each sub-region is processed to obtain a first defocus value, the second phase difference corresponding to each sub-region is processed to obtain a second defocus value, and the target sub-region corresponding to the object closest to the camera among the at least two sub-regions is determined according to the first and second defocus values of each sub-region. Focusing can thus be performed according to the object closest to the camera, which improves focusing accuracy and makes the target photographed object clearer.
In one embodiment, processing the first phase difference corresponding to each sub-region to obtain a first defocus value, and processing the second phase difference corresponding to each sub-region to obtain a second defocus value, includes:
and (a1) acquiring the credibility of the first phase difference and the credibility of the second phase difference corresponding to each subregion.
And the credibility is used for representing whether the phase difference is accurate or not. The higher the reliability is, the more accurate the phase difference is; the lower the confidence level, the less accurate the phase difference.
In this embodiment, take calculating the horizontal phase difference as an example. To calculate the phase difference at a coordinate x in a row of the image, take the luminance values L(x-2), L(x-1), L(x), L(x+1), L(x+2) of 5 pixel points in the left image, and slide a matching window across the right image, where the shift range may be -10 to +10. That is:

compare the right-image luminance values R(x-12), R(x-11), R(x-10), R(x-9), R(x-8) with L(x-2), L(x-1), L(x), L(x+1), L(x+2) for similarity;
compare the right-image luminance values R(x-11), R(x-10), R(x-9), R(x-8), R(x-7) with L(x-2), L(x-1), L(x), L(x+1), L(x+2) for similarity;
……
compare the right-image luminance values R(x-2), R(x-1), R(x), R(x+1), R(x+2) with L(x-2), L(x-1), L(x), L(x+1), L(x+2) for similarity;
compare the right-image luminance values R(x-1), R(x), R(x+1), R(x+2), R(x+3) with L(x-2), L(x-1), L(x), L(x+1), L(x+2) for similarity;
……
compare the right-image luminance values R(x+7), R(x+8), R(x+9), R(x+10), R(x+11) with L(x-2), L(x-1), L(x), L(x+1), L(x+2) for similarity;
compare the right-image luminance values R(x+8), R(x+9), R(x+10), R(x+11), R(x+12) with L(x-2), L(x-1), L(x), L(x+1), L(x+2) for similarity.

Taking the five right-image pixel values R(x-2), R(x-1), R(x), R(x+1), R(x+2) and the five left-image pixel values L(x-2), L(x-1), L(x), L(x+1), L(x+2) as an example, the similarity-matching value may be |R(x-2)-L(x-2)| + |R(x-1)-L(x-1)| + |R(x)-L(x)| + |R(x+1)-L(x+1)| + |R(x+2)-L(x+2)|. The smaller the similarity-matching value, the higher the similarity, and the higher the similarity, the higher the reliability. The best-matching pixel points can be used as matched pixel points to obtain the phase difference. For the upper and lower images, the luminance values of a row of pixel points in the upper image are compared in the same way with the luminance values of the same number of pixel points in the lower image; the process for obtaining the reliability of the upper and lower images is similar to that of the left and right images and is not repeated here.
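A sketch of this left/right similarity matching, implemented as a sum-of-absolute-differences (SAD) search; the names and bounds handling are illustrative, with no claim to match the patent's exact implementation.

    # Illustrative SAD search: left/right are 1-D luminance rows.
    # Returns (best shift, best cost); the shift is the phase difference,
    # and a lower cost means higher similarity, hence higher reliability.
    def phase_at(left, right, x, half=2, search=10):
        ref = left[x - half : x + half + 1]        # L(x-2) .. L(x+2)
        best_shift, best_cost = 0, float("inf")
        for shift in range(-search, search + 1):
            lo = x + shift - half
            if lo < 0 or lo + 2 * half + 1 > len(right):
                continue                           # window ran off the image
            win = right[lo : lo + 2 * half + 1]
            cost = sum(abs(r - l) for r, l in zip(win, ref))
            if cost < best_cost:
                best_shift, best_cost = shift, cost
        return best_shift, best_cost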
Specifically, the electronic device obtains the reliability of the first phase difference corresponding to each sub-region, and the reliability of the second phase difference corresponding to each sub-region.
(a2) Determine, according to the reliability of each first phase difference, the reference phase differences whose reliability is greater than or equal to a first reliability threshold.
A reliability threshold is a critical reliability value stored in advance in the electronic device. A reference phase difference is a phase difference whose reliability is greater than or equal to the first reliability threshold, or greater than or equal to the second reliability threshold.
Specifically, the electronic device determines, from the reliabilities of the first phase differences, the reference phase differences whose reliability is greater than or equal to the first reliability threshold, and discards the first phase differences whose reliability is below that threshold.
(a3) Determine, according to the reliability of each second phase difference, the reference phase differences whose reliability is greater than or equal to a second reliability threshold.
The second reliability threshold may be the same as or different from the first reliability threshold.
Specifically, the electronic device determines, from the reliabilities of the second phase differences, the reference phase differences whose reliability is greater than or equal to the second reliability threshold, and discards the second phase differences whose reliability is below that threshold.
(a4) Determine the reference sub-regions corresponding to the reference phase differences.
A reference phase difference may be a first phase difference whose reliability is at or above the first reliability threshold, or a second phase difference whose reliability is at or above the second reliability threshold.
Specifically, the reference sub-regions corresponding to qualifying first phase differences may overlap with those corresponding to qualifying second phase differences, so the electronic device de-duplicates the sub-regions corresponding to the reference phase differences to determine the reference sub-regions.
(a5) Process the first phase difference corresponding to each reference sub-region to obtain a first defocus value, and process the second phase difference corresponding to each reference sub-region to obtain a second defocus value.
Specifically, the electronic device obtains the first phase difference corresponding to each reference sub-region and processes it to obtain the first defocus value, and processes the second phase difference corresponding to each reference sub-region to obtain the second defocus value.
Determining, according to the first defocus value and the second defocus value of each region, the target sub-region corresponding to the object closest to the camera among the at least two sub-regions then includes:
(a6) Determine, according to the first defocus value and the second defocus value corresponding to the reference sub-regions, the target sub-region corresponding to the object closest to the camera among the at least two sub-regions.
Specifically, the electronic device ranks the first and second defocus values corresponding to the reference sub-regions and determines the target sub-region corresponding to the object closest to the camera among the at least two sub-regions, as in the sketch below.
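For illustration only, steps (a1) through (a6) might be combined as in the following Python sketch. The record layout, threshold and coefficient names, and the sign convention (minimum defocus value corresponds to the closest object) are assumptions for the example, not taken from the patent.

    # Illustrative pipeline for steps (a1)-(a6).
    def pick_target_subregion(subs, thr1, thr2, dcc1, dcc2):
        """subs: list of dicts with keys index, pd1, conf1, pd2, conf2."""
        candidates = []
        for s in subs:
            if s["conf1"] >= thr1:                 # (a2): keep reliable first PDs
                candidates.append((s["index"], s["pd1"] * dcc1))
            if s["conf2"] >= thr2:                 # (a3): keep reliable second PDs
                candidates.append((s["index"], s["pd2"] * dcc2))
        if not candidates:
            return None
        # (a5)-(a6): rank the defocus values; assume the convention in which
        # the minimum defocus value corresponds to the closest object.
        return min(candidates, key=lambda c: c[1])[0]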
The focusing method of this embodiment obtains the reliability of the first phase difference and of the second phase difference for each sub-region, determines the reference phase differences whose reliability is at or above the first reliability threshold and those at or above the second reliability threshold, determines the reference sub-regions corresponding to the reference phase differences, processes the first and second phase differences of the reference sub-regions to obtain the first and second defocus values, and determines from these the target sub-region corresponding to the object closest to the camera among the at least two sub-regions. By first screening out the phase differences with higher reliability and only then determining the target sub-region for focusing, focusing accuracy is improved.
In one embodiment, processing the first phase difference corresponding to the reference sub-region to obtain the first defocus value, and processing the second phase difference corresponding to the reference sub-region to obtain the second defocus value, includes:
acquiring a first conversion coefficient corresponding to the first phase difference and a second conversion coefficient corresponding to the second phase difference;
processing the first phase difference corresponding to the reference sub-region according to the first conversion coefficient to obtain the first defocus value corresponding to the reference sub-region;
and processing the second phase difference corresponding to the reference sub-region according to the second conversion coefficient to obtain the second defocus value corresponding to the reference sub-region.
The conversion coefficient may be used to convert a phase difference into a defocus value, and may be obtained by calibration in advance. The first conversion coefficient and the second conversion coefficient may be the same or different.
Specifically, the electronic device acquires the first conversion coefficient corresponding to the first phase difference and the second conversion coefficient corresponding to the second phase difference. The electronic device multiplies the first conversion coefficient by the first phase difference corresponding to the reference sub-region to obtain the first defocus value corresponding to the reference sub-region, and multiplies the second conversion coefficient by the second phase difference corresponding to the reference sub-region to obtain the second defocus value corresponding to the reference sub-region. For example, the first and second defocus values can be calculated according to the formula Defocus = PD × DCC, where Defocus is the defocus value, PD is the phase difference, and DCC is the conversion coefficient.
Alternatively, the electronic device may obtain the first defocus value from the first conversion coefficient, the first phase difference corresponding to the reference sub-region, and a constant corresponding to the first phase difference, and obtain the second defocus value from the second conversion coefficient, the second phase difference corresponding to the reference sub-region, and a constant corresponding to the second phase difference.
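Both conversions might be sketched as follows; the affine form with an optional constant is one reading of the paragraph above, and all coefficient values would come from calibration (none are given in the text).

    # Illustrative: per-direction conversion of phase differences into
    # defocus values; c1 and c2 are the optional constants (default 0).
    def defocus_pair(pd1, pd2, dcc1, dcc2, c1=0.0, c2=0.0):
        d1 = dcc1 * pd1 + c1   # first defocus value
        d2 = dcc2 * pd2 + c2   # second defocus value
        return d1, d2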
In the focusing method of this embodiment, the first conversion coefficient corresponding to the first phase difference and the second conversion coefficient corresponding to the second phase difference are acquired, the first phase difference corresponding to the reference sub-region is processed according to the first conversion coefficient to obtain the first defocus value corresponding to the reference sub-region, and the second phase difference corresponding to the reference sub-region is processed according to the second conversion coefficient to obtain the second defocus value corresponding to the reference sub-region, so that each phase difference is converted into a defocus value using its own calibrated coefficient.
In one embodiment, determining, according to the first defocus value and the second defocus value corresponding to the reference sub-region, the target sub-region corresponding to the object closest to the camera among the at least two sub-regions includes: determining, from the first and second defocus values corresponding to the reference sub-regions, the target defocus value corresponding to the object closest to the electronic device; and taking the sub-region corresponding to the target defocus value as the target sub-region.
The defocus value is signed, and the positive and negative signs represent the two different directions in which the lens can move. For example, movement toward the object may be positive and movement away from the object negative, or the reverse. Therefore, the target defocus value corresponding to the object closest to the electronic device may be either the maximum or the minimum defocus value.
Specifically, the electronic device determines, from the first and second defocus values corresponding to the reference sub-regions, the target defocus value corresponding to the object closest to the electronic device. The target defocus value may be a first defocus value or a second defocus value. The electronic device takes the sub-region corresponding to the target defocus value as the target sub-region.
In this embodiment, suppose for example that at calibration the focusing position of the lens is 0, the first object is 2 meters from the electronic device, the second object 5 meters, and the third object 9 meters, all within the region of interest. The defocus value calculated for the first object may be -2, for the second object 2, and for the third object 5; the target defocus value corresponding to the object closest to the electronic device is then -2, that is, the minimum of the defocus values. Under this convention, the farther the distance, the larger the defocus value.
In this embodiment, suppose instead, with the same three objects, that the defocus value calculated for the first object is 5, for the second object -1, and for the third object -9; the target defocus value corresponding to the object closest to the electronic device is then 5, that is, the maximum of the defocus values. Under this convention, the farther the distance, the smaller the defocus value.
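The two conventions can be sketched together; the flag name is an assumption for the example.

    # Illustrative: pick the target defocus value under either sign convention.
    def target_defocus(defocus_values, farther_is_larger=True):
        # True matches the first example (nearest object has the minimum
        # defocus value); False matches the second (nearest has the maximum).
        return min(defocus_values) if farther_is_larger else max(defocus_values)

    assert target_defocus([-2, 2, 5], farther_is_larger=True) == -2
    assert target_defocus([5, -1, -9], farther_is_larger=False) == 5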
In the focusing method of this embodiment, the target defocus value corresponding to the object closest to the electronic device is determined from the first and second defocus values corresponding to the reference sub-regions, and the sub-region corresponding to the target defocus value is taken as the target sub-region. The target defocus value closest to the electronic device can thus be found within the region of interest, giving the target sub-region where the target object is located; focusing according to this target sub-region improves focusing accuracy and makes the target photographed object clearer.
In one embodiment, determining, according to the first phase difference and the second phase difference corresponding to each sub-region, the target sub-region corresponding to the object closest to the camera among the at least two sub-regions includes: processing the first phase difference corresponding to each sub-region to obtain the first defocus value corresponding to each sub-region, and processing the second phase difference corresponding to each sub-region to obtain the second defocus value corresponding to each sub-region; determining, from the first and second defocus values corresponding to each sub-region, the target defocus value corresponding to the object closest to the electronic device; and taking the sub-region corresponding to the target defocus value as the target sub-region.
In the focusing method of this embodiment, the first phase difference of each sub-region is processed to obtain its first defocus value, the second phase difference of each sub-region is processed to obtain its second defocus value, the target defocus value corresponding to the object closest to the electronic device is determined from the first and second defocus values of each sub-region, and the sub-region corresponding to the target defocus value is taken as the target sub-region.
In one embodiment, an electronic device includes an image sensor including a plurality of pixel groups arranged in an array, each of the pixel groups including a plurality of pixels arranged in an array.
In this embodiment, FIG. 5 is a schematic structural diagram of part of an electronic device. As shown in FIG. 5, the electronic device may include a lens 502 and an image sensor 504, where the lens 502 may consist of a series of lens elements and the image sensor 504 may be a Complementary Metal Oxide Semiconductor (CMOS) image sensor, a Charge-Coupled Device (CCD), a quantum thin-film sensor, an organic sensor, or the like.
Referring to FIG. 6, which shows a schematic structural diagram of part of the image sensor 504: the image sensor 504 may include a plurality of pixel point groups Z arranged in an array, where each pixel point group Z includes a plurality of pixel points D arranged in an array and each pixel point D includes a plurality of sub-pixel points d arranged in an array. As shown in FIG. 6, each pixel point group Z may optionally include 4 pixel points D arranged in two rows and two columns, and each pixel point may include 4 sub-pixel points d arranged in two rows and two columns.
It should be noted that a pixel point of the image sensor 504 is a photosensitive unit, which may consist of a plurality of photosensitive elements (i.e., sub-pixel points) arranged in an array; a photosensitive element is an element capable of converting an optical signal into an electrical signal. Optionally, the photosensitive unit may further include a microlens and a filter, where the microlens is disposed over the filter and the filter over the photosensitive elements of the unit. The filter may be one of three types, red, green, and blue, each transmitting only light of the wavelengths corresponding to its color.
Fig. 7 is a schematic structural diagram of a pixel in an embodiment. As shown in fig. 7, taking a pixel composed of sub-pixel 1, sub-pixel 2, sub-pixel 3, and sub-pixel 4 as an example: combining sub-pixel 1 with sub-pixel 2 and sub-pixel 3 with sub-pixel 4 forms a PD pixel pair in the up-down direction, from which a phase difference in the vertical direction can be obtained for detecting horizontal edges; combining sub-pixel 1 with sub-pixel 3 and sub-pixel 2 with sub-pixel 4 forms a PD pixel pair in the left-right direction, from which a phase difference in the horizontal direction can be obtained for detecting vertical edges.
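As an illustration of the sub-pixel combinations just described, the following Python sketch combines the four sub-pixel luminances of one pixel into the two PD pixel pairs. The function name, the 2x2 array layout, and the use of summation to "synthesize" two sub-pixels are assumptions made here for illustration; the application does not fix these details.

```python
import numpy as np

def pd_pairs(sub):
    """Combine the four sub-pixels of one pixel into PD pixel pairs.

    sub is a 2x2 array [[s1, s2], [s3, s4]] of sub-pixel luminances.
    Up-down pair (s1+s2, s3+s4): gives a phase difference in the
        vertical direction, used to detect horizontal edges.
    Left-right pair (s1+s3, s2+s4): gives a phase difference in the
        horizontal direction, used to detect vertical edges.
    """
    s1, s2 = sub[0, 0], sub[0, 1]
    s3, s4 = sub[1, 0], sub[1, 1]
    up_down = (s1 + s2, s3 + s4)      # PD pair in the up-down direction
    left_right = (s1 + s3, s2 + s4)   # PD pair in the left-right direction
    return up_down, left_right

# One pixel whose four sub-pixels received these luminances:
pixel = np.array([[10.0, 12.0],
                  [11.0, 13.0]])
up_down, left_right = pd_pairs(pixel)
```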
Fig. 8 is a schematic diagram of the internal structure of an image sensor in one embodiment; the imaging device includes a lens and the image sensor. As shown in fig. 8, the image sensor includes a microlens 80, a filter 82, and a photosensitive unit 84, located in sequence along the incident light path; that is, the microlens 80 is disposed on the filter 82, and the filter 82 is disposed on the photosensitive unit 84.
The filter 82 may be of one of three types (red, green, or blue), each transmitting only light of the wavelengths corresponding to its color. One filter 82 is disposed on one pixel.
The microlens 80 is used to receive incident light and transmit it to the filter 82. The filter 82 filters the incident light, and the filtered light is then incident on the photosensitive unit 84 on a per-pixel basis.
The photosensitive unit 84 in the image sensor converts the light incident from the filter 82 into a charge signal by the photoelectric effect and generates a pixel signal from the charge signal; the charge signal corresponds to the received light intensity.
As can be seen from the above description, the pixels included in the image sensor and the pixels included in an image are two different concepts: a pixel of an image is the minimum unit of the image and is generally represented by a sequence of numbers, commonly referred to as the pixel value of that pixel. Both concepts are involved in the embodiments of the present application, so this brief explanation is given for the reader's convenience.
Please refer to fig. 9, which illustrates an exemplary pixel group Z. As shown in fig. 9, the pixel group Z includes 4 pixels D arranged in two rows and two columns, where the color channel of the pixel in the first row and first column is green (that is, its filter is a green filter), the color channel of the pixel in the first row and second column is red (its filter is a red filter), the color channel of the pixel in the second row and first column is blue (its filter is a blue filter), and the color channel of the pixel in the second row and second column is green (its filter is a green filter).
Fig. 10 is a schematic flowchart of acquiring the first phase difference and the second phase difference in one embodiment. As shown in fig. 10, acquiring the first phase difference and the second phase difference corresponding to each of the at least two sub-regions includes:
Step 1002, acquiring a target brightness map corresponding to the region of interest according to the brightness values of the pixels included in each pixel group in the image sensor.
The brightness value of a pixel of the image sensor can be represented by the brightness values of the sub-pixels included in that pixel; that is, the electronic device can obtain the target brightness map from the brightness values of the sub-pixels in the pixels of each pixel group within the region of interest. The brightness value of a sub-pixel refers to the brightness of the optical signal received by that sub-pixel.
Specifically, the electronic device acquires the brightness values of the pixels corresponding to the region of interest from the pixel groups of the image sensor and assembles them into the target brightness map corresponding to the region of interest.
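A minimal sketch of this step, assuming the sensor exposes its sub-pixel luminances as an array at twice the pixel resolution (the name sub_pixels is hypothetical) and assuming a pixel's luminance is represented by the mean of its four sub-pixels; the text only requires that it be derived from the sub-pixel brightness values.

```python
import numpy as np

def target_luminance_map(sub_pixels, roi):
    """Build the target luminance map for a region of interest.

    sub_pixels: array of shape (2*H, 2*W); each sensor pixel
                contributes a 2x2 block of sub-pixel luminances.
    roi:        (top, left, height, width) in pixel coordinates.
    """
    top, left, h, w = roi
    block = sub_pixels[2 * top:2 * (top + h), 2 * left:2 * (left + w)]
    # Collapse each 2x2 sub-pixel block into one pixel luminance.
    return block.reshape(h, 2, w, 2).mean(axis=(1, 3))
```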
Step 1004, performing segmentation processing on the target brightness map in the first direction to obtain a first brightness segmentation map and a second brightness segmentation map.
Specifically, when the first direction is the horizontal direction, the target luminance map is segmented along the horizontal direction, and the resulting first luminance segmentation map and second luminance segmentation map may be referred to as the upper map and the lower map, respectively.
When the first direction is the vertical direction, the target luminance map is segmented along the vertical direction, and the resulting first luminance segmentation map and second luminance segmentation map may be referred to as the left map and the right map, respectively.
Step 1006, determining a second phase difference according to the position difference of the matched pixels in the first luminance segmentation map and the second luminance segmentation map.
Pixels that match each other correspond to the different images formed on the image sensor by imaging light entering the lens from different directions, so the phase difference can be determined from the positional difference of the matched pixels. A matched point may comprise one or more pixels. For example, a point a to be detected in the first luminance segmentation map and a point b to be detected in the second luminance segmentation map match each other, where point a may correspond to the image formed at position A in fig. 3 and point b to the image formed at position B in fig. 3. The second direction corresponding to the second phase difference is perpendicular to the first direction.
Specifically, taking the first direction as the horizontal direction as an example, the first luminance segmentation map and the second luminance segmentation map are the upper map and the lower map, and the electronic device obtains the second phase difference, in the vertical direction, from the positional difference of the mutually matched pixels in the two maps.
Taking the first direction as the vertical direction as an example, the first luminance segmentation map and the second luminance segmentation map are the left map and the right map, and the electronic device obtains the second phase difference, in the horizontal direction, from the positional difference of the mutually matched pixels in the two maps.
For example, a 3-row, 3-column pixel matrix in the first luminance segmentation map is taken as a point a to be detected, with pixel values:
2 10 90
1 20 80
0 100 1
and a 3-row, 3-column pixel matrix in the second luminance segmentation map is taken as a point b to be detected, with pixel values:
1 10 90
1 21 80
0 100 2
as can be seen from the above, the two matrices are similar, and the point a to be detected and the point b to be detected can be considered to match each other. As for how to judge whether pixel matrixes are similar, there are many different methods in practical application, and a common method is to calculate the difference of pixel values of each corresponding pixel in two pixel matrixes, add the absolute values of the calculated difference values, and judge whether the pixel matrixes are similar by using the addition result, that is, if the addition result is smaller than a preset threshold, the pixel matrixes are considered to be similar, otherwise, the pixel matrixes are considered to be dissimilar.
For example, for the two 3-row, 3-column pixel matrices above, subtracting 1 from 2, 10 from 10, 90 from 90, and so on, and summing the absolute values of the differences gives a result of 3; if 3 is smaller than the preset threshold, the two matrices are considered similar.
Another common method for judging whether pixel matrices are similar is to extract their edge features, for example with a Sobel operator or a Laplacian operator, and compare the matrices by those edge features.
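The sum-of-absolute-differences (SAD) test described above can be written down directly. The sketch below searches along a row for the best SAD match and reads the phase difference off the matched offset; the block size, search range, and similarity threshold are illustrative assumptions, not values given in this application.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-sized blocks."""
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def match_offset(map_a, map_b, y, x, size=3, search=8, thresh=20):
    """For the size x size block at (y, x) in map_a, find the best-
    matching block along the same row of map_b within +/- search
    pixels using SAD; the signed offset of the best match is the
    phase difference at this point to be detected.

    The search here runs along the row (left/right maps); for
    upper/lower maps the same idea applies along the column.
    """
    ref = map_a[y:y + size, x:x + size]
    best_sad, best_dx = None, None
    for dx in range(-search, search + 1):
        x2 = x + dx
        if x2 < 0 or x2 + size > map_b.shape[1]:
            continue
        cand = map_b[y:y + size, x2:x2 + size]
        s = sad(ref, cand)
        if best_sad is None or s < best_sad:
            best_sad, best_dx = s, dx
    # Accept the match only when the blocks are similar enough.
    return best_dx if best_sad is not None and best_sad < thresh else None

# The two 3x3 matrices from the worked example above:
a = np.array([[2, 10, 90], [1, 20, 80], [0, 100, 1]])
b = np.array([[1, 10, 90], [1, 21, 80], [0, 100, 2]])
assert sad(a, b) == 3  # matches the sum computed in the text
```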
Step 1008, performing segmentation processing on the target brightness map in the second direction to obtain a third brightness segmentation map and a fourth brightness segmentation map.
Specifically, when the first direction is the horizontal direction, the second direction is the vertical direction, and the third luminance segmentation map and the fourth luminance segmentation map obtained by segmenting the target luminance map in the second direction may be referred to as the left map and the right map, respectively.
When the first direction is the vertical direction, the second direction is the horizontal direction, and the third luminance segmentation map and the fourth luminance segmentation map obtained by segmenting the target luminance map in the second direction may be referred to as the upper map and the lower map, respectively.
Step 1010, determining a first phase difference according to the positional difference of the mutually matched points to be detected in the third luminance segmentation map and the fourth luminance segmentation map.
Specifically, taking the first direction as the horizontal direction as an example, the second direction is the vertical direction, and the third luminance segmentation map and the fourth luminance segmentation map are the left map and the right map; the electronic device obtains the first phase difference, in the horizontal direction, from the positional difference of the mutually matched pixels in the two maps.
Taking the first direction as the vertical direction as an example, the second direction is the horizontal direction, and the third luminance segmentation map and the fourth luminance segmentation map are the upper map and the lower map; the electronic device obtains the first phase difference, in the vertical direction, from the positional difference of the mutually matched pixels in the two maps.
In the focusing method of this embodiment, a target brightness map corresponding to the region of interest is obtained from the brightness values of the pixels in each pixel group of the image sensor; the target brightness map is segmented in the first direction to obtain the first and second brightness segmentation maps, from whose mutually matched pixels the second phase difference is determined; and the target brightness map is segmented in the second direction to obtain the third and fourth brightness segmentation maps, from whose mutually matched points the first phase difference is determined. Compared with the traditional way of acquiring a phase difference, the region of interest in this embodiment contains richer phase difference information, which can improve the accuracy of the acquired phase differences.
In this embodiment, fig. 11 is a schematic diagram illustrating segmentation of the target luminance map in the first direction, and fig. 12 illustrates segmentation in the second direction. As shown in fig. 11, assuming the target luminance map includes 6 rows and 6 columns of pixels, segmenting in the first direction here means slicing the map column by column. The electronic device may assign the 1st, 3rd, and 5th columns of the target luminance map to a first luminance map region and the 2nd, 4th, and 6th columns to a second luminance map region, then stitch the first luminance map region into the luminance segmentation map T1 (containing columns 1, 3, and 5) and the second luminance map region into the luminance segmentation map T2 (containing columns 2, 4, and 6).
As shown in fig. 12, again assuming a 6-row, 6-column target luminance map, segmenting in the second direction means slicing the map row by row. The electronic device may assign the 1st, 3rd, and 5th rows of the target luminance map to a first luminance map region and the 2nd, 4th, and 6th rows to a second luminance map region, then stitch the former into the luminance segmentation map T3 (containing rows 1, 3, and 5) and the latter into the luminance segmentation map T4 (containing rows 2, 4, and 6).
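The interleaved slicing of figs. 11 and 12 amounts to a stride-2 selection of columns or rows; a numpy-based sketch, using the 6 x 6 example map:

```python
import numpy as np

def slice_luminance(lum, direction):
    """Split a luminance map into two interleaved segmentation maps.

    direction='col': stitch odd and even columns, as in fig. 11,
                     giving the maps T1 and T2.
    direction='row': stitch odd and even rows, as in fig. 12,
                     giving the maps T3 and T4.
    """
    if direction == 'col':
        return lum[:, 0::2], lum[:, 1::2]  # columns 1,3,5 and 2,4,6
    return lum[0::2, :], lum[1::2, :]      # rows 1,3,5 and 2,4,6

lum = np.arange(36).reshape(6, 6)  # the 6-row, 6-column example map
T1, T2 = slice_luminance(lum, 'col')
T3, T4 = slice_luminance(lum, 'row')
assert T1.shape == (6, 3) and T3.shape == (3, 6)
```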
In one embodiment, focusing according to the target sub-region comprises: acquiring the reliability of the first phase difference and the reliability of the second phase difference corresponding to the target sub-region; determining the phase difference with the higher reliability of the two; and controlling the electronic device to focus according to that phase difference.
Specifically, the electronic device obtains the reliability of the first phase difference and the reliability of the second phase difference corresponding to the target sub-region and determines the phase difference with the higher reliability. The electronic device then obtains a defocus value from that phase difference, controls the lens to move according to the defocus value, and performs automatic focusing.
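A sketch of this selection, under the assumption that the target sub-region is available as a small record holding its two phase differences and their reliabilities, and that a calibration function maps a phase difference to a defocus value; the record layout and the linear coefficient in the example are illustrative, not fixed by this application.

```python
def select_focus_defocus(target, pd_to_defocus):
    """Choose the more reliable phase difference of the target
    sub-region and convert it into a defocus value, i.e. the lens
    movement to perform.

    target: dict with keys 'pd1', 'rel1', 'pd2', 'rel2'
            (hypothetical structure, for illustration only).
    """
    pd = target['pd1'] if target['rel1'] >= target['rel2'] else target['pd2']
    return pd_to_defocus(pd)

# Example with a hypothetical linear calibration:
defocus = select_focus_defocus(
    {'pd1': 1.5, 'rel1': 0.9, 'pd2': -0.7, 'rel2': 0.6},
    pd_to_defocus=lambda pd: 0.32 * pd)  # coefficient is an assumption
```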
According to the focusing method in this embodiment, the reliability of the first phase difference and the reliability of the second phase difference corresponding to the target sub-region are obtained, the phase difference with the higher reliability is selected, and the electronic device is controlled to focus according to it. Because the more reliable of the two phase differences of the target sub-region is used, a more trustworthy defocus value is obtained, which improves focusing accuracy.
In one embodiment, the number of sub-regions is determined as follows: acquiring the number of points to be detected contained in the selected region of interest, and determining a target number of sub-regions according to the number of points to be detected. Dividing the region of interest into at least two sub-regions then means dividing the region of interest into the target number of sub-regions.
A point to be detected is a point used for detecting phase difference data and may comprise one or more pixels. The electronic device may preset a correspondence between the number of points to be detected and the number of sub-regions, or may calculate the number of sub-regions from the number of points to be detected.
Specifically, when the number of sub-regions is smaller, each sub-region contains more points to be detected, so the first phase difference and the second phase difference obtained for a sub-region are more accurate. When the number of sub-regions is larger, each sub-region contains fewer points to be detected, but the target sub-region can be located more precisely. The number of sub-regions is therefore negatively correlated with the accuracy of the phase differences and positively correlated with the precision of focusing, as the sketch below illustrates.
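One hedged way to realize the correspondence between the number of points to be detected and the sub-region count; every constant below is an assumption chosen to balance the two accuracies discussed above, not a value given in this application.

```python
def subregion_count(num_points, points_per_subregion=64,
                    min_regions=2, max_regions=16):
    """Trade-off from the paragraph above: more points per sub-region
    gives more accurate phase differences, more sub-regions give
    finer localization of the target sub-region.
    """
    n = num_points // points_per_subregion
    return max(min_regions, min(max_regions, n))

# Example: a region of interest with 640 points to be detected
# would be divided into 10 sub-regions under these assumptions.
assert subregion_count(640) == 10
```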
The focusing method in this embodiment obtains the number of points to be detected contained in the selected region of interest, determines the number of sub-regions from it, and divides the region of interest into the target number of sub-regions. This adapts to regions of interest of different sizes and to electronic devices of different specifications, improving both the accuracy of the phase differences and the accuracy of focusing.
In one embodiment, the electronic device can blur the regions of the first preview image other than the region of interest.
In one embodiment, the electronic device can blur the regions of the first preview image other than the target sub-region, which can improve the accuracy of the blurring.
In one embodiment, a focusing method includes the steps of:
and (b1) acquiring the region of interest selected in the first preview image.
And (b2) acquiring the number of the points to be detected contained in the selected region of interest.
And (b3) determining the target number of the sub-areas according to the number of the points to be detected.
And (b4) dividing the region of interest into a target number of sub-regions.
And (b5) acquiring a target brightness map corresponding to the region of interest according to the brightness values of the pixels contained in each pixel group in the image sensor.
And (b6) performing segmentation processing on the target brightness graph in the first direction to obtain a first brightness segmentation graph and a second brightness segmentation graph.
And (b7) determining a second phase difference according to the position difference of the matched pixels in the first brightness segmentation map and the second brightness segmentation map.
And (b8) performing segmentation processing on the target brightness graph in a second direction to obtain a third brightness segmentation graph and a fourth brightness segmentation graph, wherein the second direction and the first direction form a preset included angle.
And (b9) determining a first phase difference according to the position difference of the points to be detected, which are matched with each other, in the third brightness segmentation graph and the fourth brightness segmentation graph.
And (b10) acquiring the credibility of the first phase difference and the credibility of the second phase difference corresponding to each subregion.
And (b11) determining a reference phase difference which is greater than or equal to the first reliability threshold value according to the reliability of the first phase difference.
And (b12) determining the reference phase difference which is greater than or equal to the second reliability threshold value according to the reliability of the second phase difference.
And (b13) determining a reference sub-region corresponding to the reference phase difference.
And (b14) acquiring a first conversion coefficient corresponding to the first phase difference and a second conversion coefficient corresponding to the second phase difference.
And (b15) processing the first phase difference corresponding to the reference sub-region according to the first conversion coefficient to obtain a first defocus value corresponding to the reference sub-region.
And (b16) processing the second phase difference corresponding to the reference sub-region according to the second conversion coefficient to obtain a second defocus value corresponding to the reference sub-region.
And (b17) determining a target focus value corresponding to the closest distance to the electronic equipment from the first focus value and the second focus value corresponding to the reference sub-area.
And (b18) taking the sub-region corresponding to the target defocusing value as the target sub-region.
And (b19) acquiring the credibility of the first phase difference and the credibility of the second phase difference corresponding to the target sub-region.
And a step (b20) of determining the phase difference with the highest reliability from the reliability of the first phase difference and the reliability of the second phase difference.
And (b21) controlling the electronic equipment to focus according to the phase difference with the highest reliability.
The focusing method of this embodiment acquires the region of interest selected in the first preview image, divides it into at least two sub-regions, and acquires a first phase difference and a second phase difference for each sub-region, where the first direction corresponding to the first phase difference and the second direction corresponding to the second phase difference form a preset included angle. According to the first and second phase differences of each sub-region, the target sub-region corresponding to the object closest to the electronic device is determined. Compared with the traditional technique, which uses a phase difference in only one (horizontal) direction, using two phase differences improves the accuracy of the determined target sub-region; focusing according to the target sub-region means focusing on the closest object, which improves focusing accuracy and makes the target shot object clearer.
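Steps (b10) through (b18) can be sketched as follows. The record layout, the thresholds, the linear conversion (defocus = coefficient x phase difference), and the "closeness" convention that orders defocus values by object distance are all assumptions made for illustration; the application does not fix them.

```python
def select_target_subregion(subregions, k1, k2,
                            t1=0.8, t2=0.8,
                            closeness=lambda d: -d):
    """Sketch of steps (b10)-(b18): screen phase differences by
    reliability, convert the survivors to defocus values, and pick
    the sub-region whose defocus value indicates the closest object.

    subregions: list of dicts {'pd1','rel1','pd2','rel2'} (hypothetical).
    k1, k2:     conversion coefficients (assuming defocus = k * pd).
    t1, t2:     first and second reliability thresholds (assumed).
    closeness:  maps a defocus value to a closeness score; the real
                ordering depends on the module's calibration.
    """
    best_region, best_score = None, None
    for region in subregions:
        defocus_values = []
        if region['rel1'] >= t1:                       # (b11) reference PD 1
            defocus_values.append(k1 * region['pd1'])  # (b15) defocus 1
        if region['rel2'] >= t2:                       # (b12) reference PD 2
            defocus_values.append(k2 * region['pd2'])  # (b16) defocus 2
        for d in defocus_values:                       # (b17) closest object
            score = closeness(d)
            if best_score is None or score > best_score:
                best_region, best_score = region, score
    return best_region                                 # (b18) target sub-region
```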
It should be understood that, although the steps in the flowchart of fig. 10 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly ordered, and they may be performed in other orders. Moreover, at least some of the steps in fig. 10 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
FIG. 13 is a block diagram of a focusing apparatus according to an embodiment. As shown in fig. 13, the focusing apparatus is applied to an electronic device that includes an image sensor, where the image sensor includes a plurality of pixel groups arranged in an array, each pixel group includes M × N pixels arranged in an array, and each pixel corresponds to a photosensitive unit, M and N both being natural numbers greater than or equal to 2. The apparatus includes a first acquisition module 1302, a dividing module 1304, a second acquisition module 1306, a determination module 1308, and a focusing module 1310, wherein:
a first obtaining module 1302, configured to obtain a region of interest selected in the first preview image;
a dividing module 1304 for dividing the region of interest into at least two sub-regions;
a second obtaining module 1306, configured to obtain a first phase difference and a second phase difference corresponding to each of at least two sub-regions, where a first direction corresponding to the first phase difference and a second direction corresponding to the second phase difference form a preset included angle;
a determining module 1308, configured to determine, according to the first phase difference and the second phase difference corresponding to each sub-region, a target sub-region corresponding to an object closest to the camera in the at least two sub-regions;
a focusing module 1310 for focusing according to the target sub-area.
The focusing apparatus of this embodiment acquires the region of interest selected in the first preview image, divides it into at least two sub-regions, and acquires a first phase difference and a second phase difference for each sub-region, the corresponding first and second directions forming a preset included angle. According to the first and second phase differences of each sub-region, the target sub-region corresponding to the object closest to the electronic device is determined. Compared with the traditional technique, which uses a phase difference in only one (horizontal) direction, the two phase differences improve the accuracy of the determined target sub-region; focusing according to the target sub-region means focusing on the closest object, including small objects, which improves focusing accuracy and makes the target shot object clearer.
In an embodiment, the determining module 1308 is configured to process the first phase difference corresponding to each sub-region to obtain a first defocus value, and process the second phase difference corresponding to each sub-region to obtain a second defocus value; and determine, according to the first and second defocus values corresponding to each sub-region, the target sub-region corresponding to the object closest to the camera in the at least two sub-regions.
The focusing apparatus of this embodiment processes the first phase difference of each sub-region to obtain a first defocus value and the second phase difference to obtain a second defocus value, and determines from these values the target sub-region corresponding to the object closest to the camera. Focusing can thus be performed on the closest object, improving focusing accuracy and making the target shot object clearer.
In one embodiment, the determining module 1308 is configured to obtain the reliability of the first phase difference and the reliability of the second phase difference corresponding to each sub-region; determine, from the first phase differences, the reference phase differences whose reliability is greater than or equal to a first reliability threshold; determine, from the second phase differences, the reference phase differences whose reliability is greater than or equal to a second reliability threshold; determine the reference sub-regions corresponding to the reference phase differences; process the first phase difference corresponding to each reference sub-region to obtain a first defocus value and the second phase difference to obtain a second defocus value; and determine, according to the first and second defocus values corresponding to the reference sub-regions, the target sub-region corresponding to the object closest to the camera in the at least two sub-regions.
The focusing apparatus of this embodiment first uses the reliabilities to screen out the more trustworthy phase differences: it obtains the reliability of the first and second phase differences of each sub-region, keeps as reference phase differences those meeting the first or second reliability threshold, determines the corresponding reference sub-regions, converts their first and second phase differences into first and second defocus values, and then determines from those defocus values the target sub-region corresponding to the object closest to the camera. Determining the target sub-region from pre-screened, reliable phase differences improves focusing accuracy.
In one embodiment, the determining module 1308 is configured to obtain a first conversion coefficient corresponding to the first phase difference and a second conversion coefficient corresponding to the second phase difference; process the first phase difference corresponding to the reference sub-region according to the first conversion coefficient to obtain a first defocus value for the reference sub-region; and process the second phase difference corresponding to the reference sub-region according to the second conversion coefficient to obtain a second defocus value for the reference sub-region.
The focusing apparatus of this embodiment obtains the first conversion coefficient corresponding to the first phase difference and the second conversion coefficient corresponding to the second phase difference, and converts the first and second phase differences of the reference sub-region into the first and second defocus values of that sub-region accordingly.
In one embodiment, the determining module 1308 is configured to determine, from the first and second defocus values corresponding to the reference sub-regions, the target defocus value corresponding to the object closest to the electronic device, and take the sub-region corresponding to the target defocus value as the target sub-region.
The focusing apparatus of this embodiment determines the target defocus value corresponding to the object closest to the electronic device from the first and second defocus values of the reference sub-regions and takes the corresponding sub-region as the target sub-region; the defocus value closest to the electronic device is thus found within the region of interest, yielding the sub-region where the target object is located.
In an embodiment, the determining module 1308 is configured to process the first phase difference corresponding to each sub-region to obtain a first defocus value for each sub-region, and process the second phase difference corresponding to each sub-region to obtain a second defocus value for each sub-region; determine, from the first and second defocus values of each sub-region, the target defocus value corresponding to the object closest to the electronic device; and take the sub-region corresponding to the target defocus value as the target sub-region.
The focusing apparatus of this embodiment converts the first and second phase differences of each sub-region into first and second defocus values, determines among them the target defocus value corresponding to the object closest to the electronic device, and takes the corresponding sub-region as the target sub-region.
In an embodiment, the second obtaining module 1306 is configured to obtain a target luminance map corresponding to the region of interest according to the luminance values of the pixels included in each pixel group in the image sensor; segment the target luminance map in the first direction to obtain a first luminance segmentation map and a second luminance segmentation map; determine the second phase difference according to the positional difference of the mutually matched pixels in the first and second luminance segmentation maps; segment the target luminance map in the second direction, perpendicular to the first direction, to obtain a third luminance segmentation map and a fourth luminance segmentation map; and determine the first phase difference according to the positional difference of the mutually matched points to be detected in the third and fourth luminance segmentation maps.
The focusing apparatus of this embodiment obtains the target luminance map of the region of interest from the pixel luminance values of each pixel group in the image sensor, segments it in the first direction to determine the second phase difference from the matched pixels of the first and second segmentation maps, and segments it in the second direction to determine the first phase difference from the third and fourth segmentation maps. Compared with the traditional way of acquiring a phase difference, the region of interest here contains richer phase difference information, which improves the accuracy of the acquired phase differences.
In one embodiment, the focusing module 1310 is configured to obtain the reliability of the first phase difference and the reliability of the second phase difference corresponding to the target sub-region; determining the phase difference with the highest reliability from the reliability of the first phase difference and the reliability of the second phase difference; and controlling the electronic equipment to focus according to the phase difference with the highest reliability.
The focusing apparatus of this embodiment obtains the reliability of the first and second phase differences of the target sub-region, selects the phase difference with the higher reliability, and controls the electronic device to focus according to it, thereby obtaining a more trustworthy defocus value and improving focusing accuracy.
In one embodiment, the dividing module 1304 is configured to obtain the number of points to be detected contained in the selected region of interest, determine the target number of sub-regions according to that number, and divide the region of interest into the target number of sub-regions.
The focusing apparatus of this embodiment obtains the number of points to be detected in the selected region of interest, determines the target number of sub-regions accordingly, and divides the region of interest into that many sub-regions, so it adapts to regions of interest of different sizes and electronic devices of different specifications while improving phase difference accuracy and focusing accuracy.
The division of the modules in the focusing device is only used for illustration, and in other embodiments, the focusing device may be divided into different modules as needed to complete all or part of the functions of the focusing device.
Fig. 14 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in fig. 14, the electronic device includes a processor and a memory connected by a system bus. The processor provides computing and control capabilities and supports the operation of the entire electronic device. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, which can be executed by the processor to implement the focusing method provided in the foregoing embodiments. The internal memory provides a cached execution environment for the operating system and the computer programs in the non-volatile storage medium. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
Each module in the focusing apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may be run on a terminal or a server, and the program modules constituted by the computer program may be stored in the memory of the terminal or the server. When the computer program is executed by a processor, the steps of the methods described in the embodiments of the present application are performed.
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the focusing method.
A computer program product comprising instructions which, when run on a computer, cause the computer to perform a focusing method.
Any reference to memory, storage, database, or other medium used by the embodiments of the present application may include non-volatile and/or volatile memory. Suitable non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application, and although their descriptions are specific and detailed, they should not be construed as limiting the scope of the application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within the protection scope of the application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (12)

1. A focusing method applied to an electronic device, wherein the electronic device comprises an image sensor, the image sensor comprises a plurality of pixel groups arranged in an array, each pixel group comprises M × N pixels arranged in an array, and each pixel corresponds to a photosensitive unit, M and N both being natural numbers greater than or equal to 2; the method comprises the following steps:
acquiring a region of interest selected in the first preview image;
dividing the region of interest into at least two sub-regions;
acquiring a first phase difference and a second phase difference corresponding to each sub-area in the at least two sub-areas, wherein a first direction corresponding to the first phase difference and a second direction corresponding to the second phase difference form a preset included angle;
determining a target sub-region corresponding to an object closest to the electronic device in the at least two sub-regions according to the first phase difference and the second phase difference corresponding to each sub-region;
and focusing according to the target subarea.
2. The method according to claim 1, wherein the determining a target sub-region corresponding to an object closest to the electronic device in the at least two sub-regions according to the first phase difference and the second phase difference corresponding to each sub-region comprises:
processing the first phase difference corresponding to each sub-region to obtain a first defocus value, and processing the second phase difference corresponding to each sub-region to obtain a second defocus value;
and determining a target sub-region corresponding to an object closest to the camera in the at least two sub-regions according to the first defocus value and the second defocus value corresponding to each sub-region.
3. The method of claim 2, wherein the processing the first phase difference corresponding to each sub-region to obtain a first defocus value and the processing the second phase difference corresponding to each sub-region to obtain a second defocus value comprises:
obtaining the reliability of the first phase difference and the reliability of the second phase difference corresponding to each sub-region;
determining a reference phase difference whose reliability is greater than or equal to a first reliability threshold according to the reliability of the first phase difference;
determining a reference phase difference whose reliability is greater than or equal to a second reliability threshold according to the reliability of the second phase difference;
determining a reference sub-region corresponding to the reference phase difference;
processing the first phase difference corresponding to the reference sub-region to obtain the first defocus value, and processing the second phase difference corresponding to the reference sub-region to obtain the second defocus value;
and the determining a target sub-region corresponding to an object closest to the camera in the at least two sub-regions according to the first defocus value and the second defocus value of each sub-region comprises:
determining the target sub-region corresponding to the object closest to the camera in the at least two sub-regions according to the first defocus value and the second defocus value corresponding to the reference sub-region.
4. The method of claim 3, wherein the processing the first phase difference corresponding to the reference sub-region to obtain the first defocus value and the processing the second phase difference corresponding to the reference sub-region to obtain the second defocus value comprises:
acquiring a first conversion coefficient corresponding to the first phase difference and a second conversion coefficient corresponding to the second phase difference;
processing the first phase difference corresponding to the reference sub-region according to the first conversion coefficient to obtain the first defocus value corresponding to the reference sub-region;
and processing the second phase difference corresponding to the reference sub-region according to the second conversion coefficient to obtain the second defocus value corresponding to the reference sub-region.
5. The method of claim 3, wherein the determining the target sub-region of the at least two sub-regions corresponding to the object closest to the camera according to the first defocus value and the second defocus value corresponding to the reference sub-region comprises:
determining, from the first defocus value and the second defocus value corresponding to the reference sub-region, a target defocus value corresponding to the object closest to the electronic device;
and taking the sub-region corresponding to the target defocus value as the target sub-region.
6. The method according to claim 1, wherein the determining a target sub-region corresponding to an object closest to the camera in the at least two sub-regions according to the first phase difference and the second phase difference corresponding to each sub-region comprises:
processing the first phase difference corresponding to each sub-region to obtain a first defocus value corresponding to each sub-region, and processing the second phase difference corresponding to each sub-region to obtain a second defocus value corresponding to each sub-region;
determining, from the first defocus value and the second defocus value corresponding to each sub-region, a target defocus value corresponding to the object closest to the electronic device;
and taking the sub-region corresponding to the target defocus value as the target sub-region.
7. The method according to any one of claims 1 to 6, wherein the acquiring the first phase difference and the second phase difference corresponding to each of the at least two sub-regions comprises:
acquiring a target brightness map corresponding to the region of interest according to the brightness values of the pixels contained in each pixel group in the image sensor;
carrying out segmentation processing on the target brightness graph in the first direction to obtain a first brightness segmentation graph and a second brightness segmentation graph;
determining the second phase difference according to the position difference of the mutually matched pixels in the first brightness segmentation graph and the second brightness segmentation graph;
carrying out segmentation processing on the target brightness graph in the second direction to obtain a third brightness segmentation graph and a fourth brightness segmentation graph;
and determining the first phase difference according to the position difference of the points to be detected, which are matched with each other, in the third brightness segmentation graph and the fourth brightness segmentation graph.
8. The method of any one of claims 1 to 6, wherein focusing according to a target sub-area comprises:
obtaining the reliability of the first phase difference and the reliability of the second phase difference corresponding to the target sub-region;
determining the phase difference with the highest reliability from the reliability of the first phase difference and the reliability of the second phase difference;
and controlling the electronic equipment to focus according to the phase difference with the highest reliability.
9. The method according to any one of claims 1 to 6, wherein the number of sub-regions is determined in a manner comprising:
acquiring the number of points to be detected contained in the selected region of interest;
determining the target number of the sub-regions according to the number of the points to be detected;
the dividing the region of interest into at least two sub-regions comprises:
dividing the region of interest into the target number of sub-regions.
10. A focusing apparatus applied to an electronic device, wherein the electronic device comprises an image sensor, the image sensor comprises a plurality of pixel groups arranged in an array, each pixel group comprises M × N pixels arranged in an array, and each pixel corresponds to a photosensitive unit, M and N both being natural numbers greater than or equal to 2; the apparatus comprises:
the first acquisition module is used for acquiring the region of interest selected in the first preview image;
the dividing module is used for dividing the region of interest into at least two sub-regions;
a second obtaining module, configured to obtain a first phase difference and a second phase difference corresponding to each of the at least two sub-regions, where a first direction corresponding to the first phase difference and a second direction corresponding to the second phase difference form a preset included angle;
the determining module is used for determining a target sub-region corresponding to an object which is closest to the camera in the at least two sub-regions according to the first phase difference and the second phase difference corresponding to each sub-region;
and the focusing module is used for focusing according to the target sub-area.
11. An electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of the focusing method as claimed in any one of claims 1 to 9.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 9.
CN201911101418.0A 2019-11-12 2019-11-12 Focusing method and device, electronic equipment and computer readable storage medium Active CN112866547B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911101418.0A CN112866547B (en) 2019-11-12 2019-11-12 Focusing method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112866547A true CN112866547A (en) 2021-05-28
CN112866547B CN112866547B (en) 2023-01-31

Family

ID=75984328

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911101418.0A Active CN112866547B (en) 2019-11-12 2019-11-12 Focusing method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112866547B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1115048A (en) * 1997-06-20 1999-01-22 Olympus Optical Co Ltd Range finder for camera
CN103676082A (en) * 2012-09-11 2014-03-26 索尼公司 Focus detection device, imaging apparatus, and method of controlling focus detection device
JP2017103525A (en) * 2015-11-30 2017-06-08 株式会社ニコン Tracking device and imaging apparatus
CN106210527A (en) * 2016-07-29 2016-12-07 广东欧珀移动通信有限公司 The PDAF calibration steps moved based on MEMS and device
CN108337424A (en) * 2017-01-17 2018-07-27 中兴通讯股份有限公司 A kind of phase focusing method and its device
CN106921823A (en) * 2017-04-28 2017-07-04 广东欧珀移动通信有限公司 Imageing sensor, camera module and terminal device
CN106973206A (en) * 2017-04-28 2017-07-21 广东欧珀移动通信有限公司 Camera module image pickup processing method, device and terminal device
CN107566741A (en) * 2017-10-26 2018-01-09 广东欧珀移动通信有限公司 Focusing method, device, computer-readable recording medium and computer equipment

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023020375A1 (en) * 2021-08-18 2023-02-23 影石创新科技股份有限公司 Automatic focusing method and apparatus, and photographing terminal and computer-readable storage medium

Also Published As

Publication number Publication date
CN112866547B (en) 2023-01-31

Similar Documents

Publication Publication Date Title
CN109089047B (en) Method and device for controlling focusing, storage medium and electronic equipment
JP7003238B2 (en) Image processing methods, devices, and devices
KR102293443B1 (en) Image processing method and mobile terminal using dual camera
CN110536057B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN112866549B (en) Image processing method and device, electronic equipment and computer readable storage medium
WO2019105206A1 (en) Method and device for image processing
CN109712192B (en) Camera module calibration method and device, electronic equipment and computer readable storage medium
WO2019085951A1 (en) Image processing method, and device
CN110349163B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN109559353B (en) Camera module calibration method and device, electronic equipment and computer readable storage medium
WO2021093637A1 (en) Focusing method and apparatus, electronic device, and computer readable storage medium
CN111246100B (en) Anti-shake parameter calibration method and device and electronic equipment
CN109951641B (en) Image shooting method and device, electronic equipment and computer readable storage medium
CN109559352B (en) Camera calibration method, device, electronic equipment and computer-readable storage medium
CN112866553B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN113875219A (en) Image processing method and device, electronic equipment and computer readable storage medium
CN109697737B (en) Camera calibration method, device, electronic equipment and computer-readable storage medium
CN109584311B (en) Camera calibration method, device, electronic equipment and computer-readable storage medium
CN112866655B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN112866547B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN110689007B (en) Subject recognition method and device, electronic equipment and computer-readable storage medium
CN112866554B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN112866546B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN112866545B (en) Focusing control method and device, electronic equipment and computer readable storage medium
CN112866552B (en) Focusing method and device, electronic equipment and computer readable storage medium

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant