CN114666508A - Focusing method and device and terminal computer readable storage medium - Google Patents


Info

Publication number
CN114666508A
Authority
CN
China
Prior art keywords
focusing
information
sensor
confidence
terminal
Prior art date
Legal status
Pending
Application number
CN202210368511.3A
Other languages
Chinese (zh)
Inventor
潘溢慧
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202210368511.3A
Publication of CN114666508A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a focusing method, a focusing device, a terminal, and a non-volatile computer-readable storage medium. The focusing method comprises the following steps: acquiring first depth information of a target object according to a first sensor to determine first focusing information; acquiring image information of the target object according to a second sensor to determine second focusing information; and focusing a camera of the terminal according to the first focusing information and the second focusing information. When the camera of the terminal is focused, the focusing method, the focusing device, the terminal, and the non-volatile computer-readable storage medium can flexibly select between the first focusing information and the second focusing information to achieve accurate focusing, thereby ensuring the imaging quality of the final captured image and improving the user experience.

Description

Focusing method and device, terminal, and computer-readable storage medium
Technical Field
The present disclosure relates to the field of electronic technologies, and in particular, to a focusing method, a focusing apparatus, a terminal, and a non-volatile computer-readable storage medium.
Background
With the continuous development of science and technology, smart terminals have become deeply integrated into people's daily lives, and more and more people choose to take photos with their smartphones. During shooting, whether the camera focuses, and whether it focuses accurately, often determines the sharpness of the photo. At present, cameras focus by contrast detection and phase detection; however, when the image captured in the camera's preview interface is of poor quality, accurate focusing is often difficult, which can degrade the final imaging quality and result in a poor user experience.
Disclosure of Invention
The embodiment of the application provides a focusing method, a focusing device, a terminal and a non-volatile computer readable storage medium.
The focusing method of the embodiment of the application comprises the following steps: acquiring first depth information of a target object according to a first sensor to determine first focus information; acquiring image information of a target object according to a second sensor to determine second focus information; and focusing the camera of the terminal according to the first focusing information and the second focusing information.
The focusing device of the embodiment of the application comprises a first acquisition module, a second acquisition module and a focusing module. The first acquisition module is used for acquiring first depth information of the target object according to the first sensor so as to determine first focus information. The second acquisition module is used for acquiring image information of the target object according to the second sensor so as to determine second focus information. The focusing module is used for focusing the camera of the terminal according to the first focusing information and the second focusing information.
The terminal of the embodiments of the present application includes one or more processors, memory, and one or more programs, where the one or more programs are stored in the memory and executed by the one or more processors. The program includes instructions for executing a focusing method as follows: acquiring first depth information of a target object according to a first sensor to determine first focus information; acquiring image information of a target object according to a second sensor to determine second focus information; and focusing the camera of the terminal according to the first focusing information and the second focusing information.
The non-transitory computer-readable storage medium of the embodiments of the present application contains a computer program that, when executed by one or more processors, causes the processors to perform a focusing method of: acquiring first depth information of a target object according to a first sensor to determine first focus information; acquiring image information of the target object according to the second sensor to determine second focus information; and focusing the camera of the terminal according to the first focusing information and the second focusing information.
In the focusing method, the focusing device, the terminal and the non-volatile computer readable storage medium of the embodiment of the application, the first focusing information and the second focusing information are respectively acquired through the first sensor and the second sensor, wherein the first focusing information is acquired according to the depth information of the target object, and the second focusing information is acquired according to the image information of the target object.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart of a focusing method according to some embodiments of the present disclosure;
FIG. 2 is a block diagram of a terminal according to some embodiments of the present application;
FIG. 3 is a schematic structural diagram of a focusing device according to some embodiments of the present application;
FIG. 4 is a schematic flow chart diagram illustrating a focusing method according to some embodiments of the present disclosure;
FIG. 5 is a schematic view of a focusing method according to some embodiments of the present application;
FIGS. 6 to 10 are schematic flow charts of focusing methods according to some embodiments of the present disclosure;
FIG. 11 is a schematic view of a focusing device according to some embodiments of the present application;
FIG. 12 is a schematic diagram of a connection state of a non-volatile computer readable storage medium and a processor of some embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the embodiments of the present application.
Referring to fig. 1 and fig. 2, a focusing method is provided in an embodiment of the present disclosure. The focusing method comprises the following steps:
01: acquiring first depth information of the target object according to the first sensor 20 to determine first focus information;
02: acquiring image information of the target object according to the second sensor 30 to determine second focus information; and
03: and focusing the camera of the terminal according to the first focusing information and the second focusing information.
Referring to fig. 3, a focusing apparatus 10 is provided in an embodiment of the present application. The focusing apparatus 10 includes a first acquisition module 11, a second acquisition module 12, and a focusing module 13. The focusing method of the embodiment of the present application can be applied to the focusing apparatus 10. The first acquisition module 11, the second acquisition module 12, and the focusing module 13 are respectively configured to execute step 01, step 02, and step 03. That is, the first acquisition module 11 is configured to acquire first depth information of the target object according to the first sensor 20 to determine first focusing information. The second acquisition module 12 is configured to acquire image information of the target object according to the second sensor 30 to determine second focusing information. The focusing module 13 is configured to focus the camera of the terminal according to the first focusing information and the second focusing information.
Referring to fig. 2, a terminal is provided in an embodiment of the present application. The focusing method of the embodiment of the present application can be applied to the terminal. The terminal includes one or more processors 50, a memory, and one or more programs, where the one or more programs are stored in the memory and executed by the one or more processors 50. The one or more programs include instructions for performing the methods in steps 01, 02, and 03. That is, when executing the one or more programs, the processor 50 may: acquire first depth information of the target object according to the first sensor 20 to determine first focusing information; acquire image information of the target object according to the second sensor 30 to determine second focusing information; and focus the camera of the terminal according to the first focusing information and the second focusing information.
Specifically, the terminal further includes a first sensor 20, a second sensor 30, and a housing 60. The first sensor 20 is a time of flight (TOF) sensor, and the second sensor 30 is a Phase Detection Auto Focus (PDAF) sensor. The first sensor 20, the second sensor 30 and the processor 50 are disposed in the housing 60. The housing 60 may also be used to mount functional modules of the terminal, such as an imaging device, a power supply device, and a communication device, so that the housing 60 provides protection for the functional modules, such as dust prevention, drop prevention, and water prevention.
The terminal may be VR glasses, AR glasses, a smartphone, a tablet computer, a notebook computer, a smart watch, a game console, a head-mounted display device, or the like. This application takes a smartphone as an example of the terminal; it can be understood that the terminal includes, but is not limited to, a smartphone.
More specifically, when the first sensor 20 acquires the first depth information of the target object to determine the first focusing information, the first sensor 20 emits light and calculates the depth between the target object and the first sensor 20 according to the time the light takes to return to the first sensor 20. From this depth information, the direction and distance the camera lens needs to move for focusing, i.e., the first focusing information, is obtained, e.g., moving 1 mm away from the target object.
When the second sensor 30 collects the image information of the target object to determine the second focusing information, the principle is that some masked pixels are reserved on the photosensitive element of the second sensor 30 for phase detection, so that the direction and distance the camera lens needs to move for focusing, i.e., the second focusing information, can be obtained from the separation of these pixel pairs in the image and its variation, e.g., moving 1 mm toward the target object.
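As an illustration only (the following sketch is not part of the patent, and its names are hypothetical), the TOF depth measurement described above converts the round-trip time of the emitted light into a depth, from which the lens movement can then be derived:

```python
# Illustrative sketch only (not from the patent): a TOF sensor measures the
# round-trip time of emitted light; depth is half the distance light travels
# in that time. The function name is hypothetical.
C = 299_792_458.0  # speed of light, m/s

def tof_depth_m(round_trip_s: float) -> float:
    """Depth between sensor and target: (c * round-trip time) / 2."""
    return C * round_trip_s / 2.0

# A 2 ns round trip corresponds to about 0.3 m of depth.
print(round(tof_depth_m(2e-9), 3))  # prints 0.3
```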
Next, the processor 50 may focus the camera of the terminal according to the first focus information and the second focus information.
Specifically, the processor 50 may first determine whether to select the first focus information or the second focus information by calculating a confidence of the first focus information and a confidence of the second focus information to focus the camera of the terminal.
The confidence of the first focusing information is related to the intensity of the laser light emitted by the first sensor 20 (the TOF sensor): the greater the intensity of the emitted laser light, the higher the confidence of the first focusing information. The confidence of the first focusing information is also related to the first depth information measured by the first sensor 20, i.e., the distance between the target object and the first sensor 20: the larger the first depth information, the lower the confidence of the first focusing information. As such, the processor 50 may score the confidence of the first focusing information according to the intensity of the laser light emitted by the first sensor 20 and/or the magnitude of the first depth information, thereby obtaining the confidence of the first focusing information.
The confidence of the second focusing information is related to the amount of image detail in the preview image captured by the camera of the terminal in the preview interface: the more image detail, the higher the confidence of the second focusing information. Image detail refers to gray-level changes in the preview image, including isolated points, thin lines, abrupt changes in the picture, and the like. As such, the processor 50 may score the confidence of the second focusing information according to the image detail in the image information acquired by the second sensor 30, thereby obtaining the confidence of the second focusing information.
In one embodiment, after the processor 50 obtains the confidence of the first focusing information and the confidence of the second focusing information, the processor 50 may determine which focusing information to select to focus the camera of the terminal by comparing the confidence of the first focusing information and the confidence of the second focusing information. For example, if the confidence of the first focus information is 0.8 and the confidence of the second focus information is 0.9, which indicates that the second focus information is more reliable, the processor 50 selects the second focus information to focus the camera of the terminal. For another example, if the confidence of the first focus information is 0.95 and the confidence of the second focus information is 0.9, the first focus information is more reliable, and the processor 50 selects the first focus information to focus the camera of the terminal.
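The confidence comparison just described can be sketched as follows; this is an illustration with hypothetical names, not the patent's implementation:

```python
# Illustrative sketch of the confidence comparison described above; the
# function and parameter names are hypothetical, not from the patent.
def select_focus(first_move_mm: float, first_conf: float,
                 second_move_mm: float, second_conf: float) -> float:
    """Return the lens movement from whichever focusing information has the
    higher confidence (positive: toward the object; negative: away)."""
    if first_conf > second_conf:
        return first_move_mm
    return second_move_mm

print(select_focus(-1.0, 0.8, 1.0, 0.9))   # prints 1.0 (second wins)
print(select_focus(-1.0, 0.95, 1.0, 0.9))  # prints -1.0 (first wins)
```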
In the prior art, when the PDAF sensor, i.e., the second sensor 30, performs focusing, focusing is prone to be inaccurate if dark or low-texture areas appear in the camera's preview image. A dark image indicates that the terminal is in a low-light environment; a low-texture area indicates that the image contains little detail.
It can be understood that, in the above cases, when the processor 50 captures the target object through the second sensor 30, the confidence determined for the second focusing information is low. The processor 50 then selects the first focusing information to focus the camera of the terminal, eliminating the influence of a poor-quality preview image on the camera's focusing. In this way, the camera can be focused accurately, the final imaging quality is ensured, and the user experience is improved.
In the focusing method, the focusing device 10 and the terminal according to the embodiment of the application, the first focusing information and the second focusing information are respectively obtained through the first sensor 20 and the second sensor 30, wherein the first focusing information is obtained according to the depth information of the target object, and the second focusing information is obtained according to the image information of the target object, so that when a camera of the terminal is focused, the first focusing information and the second focusing information can be flexibly selected to finish accurate focusing of the camera, so as to ensure the imaging quality of a finally shot image, and thus the use experience of a user is improved.
Referring to fig. 2-4, in some embodiments, step 01: acquiring first depth information of the target object according to the first sensor 20 may include the steps of:
011: acquiring depth information of different positions of the target object according to the first sensor 20; and
012: and determining first depth information according to the depth information of different positions.
In some embodiments, the first acquisition module 11 is configured to perform steps 011 and 012. That is, the first acquisition module 11 is configured to acquire depth information of different positions of the target object according to the first sensor 20, and to determine the first depth information according to the depth information of the different positions.
In some embodiments, one or more programs are used to perform steps 011 and 012. The processor 50 is configured to perform steps 011 and 012. That is, the processor 50 is configured to acquire depth information of different positions of the target object according to the first sensor 20, and to determine the first depth information according to the depth information of the different positions.
Specifically, when the processor 50 controls the first sensor 20 to acquire the first depth information of the target object, the processor 50 may acquire the depth information of different positions of the target object first, so that the processor 50 may determine the first depth information according to the depth information of different positions of the target object.
In one embodiment, the first depth information may be the depth information at the center of the different sampled positions on the target object. As shown in fig. 5, when the processor 50 controls the first sensor 20 to scan the target object, the depth information of 4 positions P1 to P4 on the target object may be acquired; the first depth information is then the depth information of the point P0 located at the center of these 4 positions.
In yet another embodiment, the first depth information may also be the average of the depth information at different positions on the target object. As shown in fig. 5, if the processor 50 acquires depth information of 8 mm, 6 mm, 7 mm, and 11 mm at the four positions P1 to P4 on the target object, respectively, the first depth information is their average, 8 mm.
In still another embodiment, the first depth information may also be the depth information corresponding to the position the user touches on the display screen of the terminal. It can be understood that when a user touches a certain position on the display screen while photographing with the terminal, the user wants the camera to focus on that position. Therefore, the first depth information may also be determined according to the depth information corresponding to the touch position on the display screen; if the depth information at the touch position is 8 mm, the first depth information is 8 mm.
In this way, after the first depth information is determined, the camera of the terminal can be focused accurately: at the center of the target object, on the object as a whole, or at the position the user wants to focus on. That is, different regions can be focused according to the three embodiments above.
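The three embodiments above (center depth, average depth, touch-position depth) can be sketched as follows; the sampling layout and function names are illustrative assumptions, not the patent's implementation:

```python
# Illustrative sketch (hypothetical names) of the three embodiments above for
# deriving the first depth information from per-position depth samples.
from statistics import mean

def first_depth_center(samples: dict) -> float:
    """Embodiment 1: depth at the sampled point nearest the center (P0)."""
    cx = mean(x for x, _ in samples)
    cy = mean(y for _, y in samples)
    p0 = min(samples, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
    return samples[p0]

def first_depth_average(samples: dict) -> float:
    """Embodiment 2: average of all sampled depths."""
    return mean(samples.values())

def first_depth_touch(samples: dict, touch_pos: tuple) -> float:
    """Embodiment 3: depth at the user's touch position (assumed sampled)."""
    return samples[touch_pos]

# Depths at four positions P1-P4, as in the 8/6/7/11 mm example:
depths = {(0, 0): 8.0, (1, 0): 6.0, (0, 1): 7.0, (1, 1): 11.0}
print(first_depth_average(depths))  # prints 8.0
```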
Referring to fig. 2, fig. 3 and fig. 6, the focusing method according to the embodiment of the present application may further include the steps of:
04: and when the difference value of the first depth information of two continuous frames is greater than a preset threshold value, sending the first depth information to the terminal.
In some embodiments, the focusing device 10 further includes a sending module 14, and the sending module 14 is configured to execute step 04. That is, the sending module 14 is configured to send the first depth information to the terminal when a difference between the first depth information of two consecutive frames is greater than a preset threshold.
In certain embodiments, one or more programs are used to perform step 04. Processor 50 is configured to perform step 04. That is, the processor 50 is configured to send the first depth information to the terminal when a difference value of the first depth information of two consecutive frames is greater than a preset threshold.
Specifically, the processor 50 may further control the first sensor 20 to acquire the first depth information of the target object at a predetermined frame rate. If 5 frames of images of the target object are acquired every second, 5 pieces of first depth information can be acquired correspondingly.
Next, the processor 50 may determine whether the difference between the first depth information of two consecutive frames is greater than a preset threshold, and send the first depth information to the terminal when it is. For example, suppose the first depth information corresponding to the target object in the first frame is 8 mm, the first depth information in the second frame is 10 mm, and the preset threshold is 5 mm. The difference between the first depth information of the two consecutive frames is then smaller than the preset threshold, so the processor 50 determines that the user only shook slightly during shooting rather than switching to a different target object, and does not update the first depth information in the terminal. If, instead, the first depth information in the first frame is 8 mm and that in the second frame is 15 mm, the target object being photographed has been replaced, and the processor 50 needs to send the updated first depth information to the terminal to update the first focusing information, so that the camera of the terminal can focus on the new target object.
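The frame-to-frame update rule above can be sketched as follows; the 5 mm threshold and the names are illustrative examples taken from the text, not fixed by the patent:

```python
# Illustrative sketch of step 04; the 5 mm threshold and names are
# hypothetical examples taken from the text above.
PRESET_THRESHOLD_MM = 5.0

def should_update(prev_depth_mm: float, curr_depth_mm: float) -> bool:
    """Report the new first depth information only when the change between
    two consecutive frames exceeds the preset threshold (i.e. the target
    object was likely replaced, not merely hand shake)."""
    return abs(curr_depth_mm - prev_depth_mm) > PRESET_THRESHOLD_MM

print(should_update(8.0, 10.0))  # prints False (slight shake, keep focus)
print(should_update(8.0, 15.0))  # prints True (new target, refocus)
```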
Referring to fig. 2, 3 and 7, in some embodiments, step 03: according to the first focusing information and the second focusing information, focusing a camera of the terminal can be performed, and the method can comprise the following steps:
031: acquiring a first confidence degree of the first focusing information and a second confidence degree of the second focusing information; and
032: when the first confidence is greater than the second confidence, selecting the first focusing information to focus the camera of the terminal; or
033: when the second confidence is greater than the first confidence, selecting the second focusing information to focus the camera of the terminal.
In some embodiments, the focusing module 13 is configured to perform step 031, step 032, and step 033. That is, the focusing module 13 is configured to obtain a first confidence degree of the first focusing information and a second confidence degree of the second focusing information; when the first confidence coefficient is larger than the second confidence coefficient, selecting first focusing information to focus a camera of the terminal; or when the second confidence coefficient is greater than the first confidence coefficient, selecting second focusing information to focus the camera of the terminal.
In certain embodiments, one or more programs are used to perform steps 031, 032, and 033. Processor 50 is configured to perform step 031, step 032, and step 033. That is, the processor 50 is configured to obtain a first confidence level of the first focus information and a second confidence level of the second focus information; when the first confidence coefficient is larger than the second confidence coefficient, selecting first focusing information to focus a camera of the terminal; or when the second confidence coefficient is greater than the first confidence coefficient, selecting second focusing information to focus the camera of the terminal.
Specifically, before the processor 50 focuses the camera of the terminal according to the first focusing information and the second focusing information, the processor 50 may first obtain a first confidence level of the first focusing information and a second confidence level of the second focusing information, and after the first confidence level and the second confidence level are determined, determine whether to select the first focusing information or the second focusing information to focus the camera of the terminal.
As described above, the first confidence is related to the intensity of the laser light emitted by the first sensor 20: the greater the intensity of the laser light emitted by the TOF sensor, the higher the confidence of the first focusing information. The first confidence is also related to the first depth information measured by the first sensor 20, i.e., the distance between the target object and the first sensor 20: the larger the first depth information, the lower the confidence of the first focusing information.
The second confidence is related to the amount of image detail in the preview image captured by the camera of the terminal in the preview interface: the more image detail, the higher the confidence of the second focusing information. Image detail refers to gray-level changes in the preview image, including isolated points, thin lines, abrupt changes in the picture, and the like.
Thus, after the processor 50 determines the first confidence level and the second confidence level, the processor 50 can determine which focusing information to select for focusing the camera of the terminal. For example, when the first confidence level is 0.95 and the second confidence level is 0.9, that is, the first confidence level is greater than the second confidence level, the processor 50 may select the first focusing information to focus the camera of the terminal. For another example, when the first confidence level is 0.8 and the second confidence level is 0.9, that is, the second confidence level is greater than the first confidence level, the processor 50 may select the second focusing information to focus the camera of the terminal.
In summary, the processor 50 flexibly selects between the first focusing information and the second focusing information to achieve accurate focusing of the camera of the terminal, thereby improving the user experience.
Referring to fig. 2, 3 and 8, in some embodiments, step 03: according to the first focusing information and the second focusing information, focusing a camera of the terminal can be performed, and the method can comprise the following steps:
034: acquiring a first confidence degree of the first focusing information and a second confidence degree of the second focusing information;
035: determining a first weight corresponding to the first confidence coefficient and a second weight corresponding to the second confidence coefficient;
036: determining target focusing information according to the first weight, the first focusing information, the second weight and the second focusing information; and
037: and focusing the camera of the terminal according to the target focusing information.
In certain embodiments, focusing module 13 is configured to perform steps 034, 035, 036, and 037. That is, the focusing module 13 is configured to obtain a first confidence level of the first focusing information and a second confidence level of the second focusing information; determining a first weight corresponding to the first confidence coefficient and a second weight corresponding to the second confidence coefficient; determining target focusing information according to the first weight, the first focusing information, the second weight and the second focusing information; and focusing the camera of the terminal according to the target focusing information.
In certain embodiments, one or more programs are used to perform steps 034, 035, 036, and 037. Processor 50 is configured to perform steps 034, 035, 036 and 037. That is, the processor 50 is configured to obtain a first confidence level of the first focus information and a second confidence level of the second focus information; determining a first weight corresponding to the first confidence coefficient and a second weight corresponding to the second confidence coefficient; determining target focusing information according to the first weight, the first focusing information, the second weight and the second focusing information; and focusing the camera of the terminal according to the target focusing information.
Specifically, after the processor 50 obtains the first confidence of the first focus information and the second confidence of the second focus information, the processor 50 may further determine a first weight corresponding to the first confidence and a second weight corresponding to the second confidence. For example, the first weight may be the first confidence/(sum of the first confidence and the second confidence), and the second weight may be the second confidence/(sum of the first confidence and the second confidence).
In this way, the processor 50 may determine the target focusing information according to the first weight, the first focusing information, the second weight, and the second focusing information.
For example, if the first confidence is 0.8 and the second confidence is 0.9, the first weight is 0.47 and the second weight is 0.53. If the first focusing information is to move the camera lens 1 mm away from the target object, and the second focusing information is to move it 2 mm away from the target object, the target focusing information is 0.47 × 1 + 0.53 × 2 = 1.53 mm, i.e., the target focusing information is to move the camera lens 1.53 mm away from the target object.
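The weighted fusion of steps 034 to 037 can be sketched as follows; the names are hypothetical, and each weight is that confidence divided by the sum of the two confidences, as stated above:

```python
# Illustrative sketch of the weighted fusion in steps 034-037; names are
# hypothetical. Each weight is its confidence divided by the confidence sum.
def fuse_focus(first_move_mm: float, first_conf: float,
               second_move_mm: float, second_conf: float) -> float:
    """Blend the two lens movements by their normalized confidences."""
    total = first_conf + second_conf
    w1 = first_conf / total
    w2 = second_conf / total
    return w1 * first_move_mm + w2 * second_move_mm

# Confidences 0.8 and 0.9; movements 1 mm and 2 mm away from the object:
print(round(fuse_focus(1.0, 0.8, 2.0, 0.9), 2))  # prints 1.53
```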
Finally, the processor 50 may control the lens of the camera of the terminal to move according to the target focusing information, thereby completing the focusing of the camera of the terminal. Since the target focusing information fuses the first focusing information and the second focusing information with the first weight and the second weight obtained from the first confidence of the first focusing information and the second confidence of the second focusing information, the processor 50 can obtain a more accurate focusing result when focusing the camera of the terminal through the target focusing information, thereby ensuring the user experience.
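The confidence-weighted fusion described above can be sketched in a few lines. This is an illustrative sketch only, not the patent's implementation; the function name and signature are assumptions, and the numbers reproduce the worked example in the text (confidences 0.8 and 0.9, movements of 1 mm and 2 mm).

```python
def fuse_focus_info(first_move_mm: float, second_move_mm: float,
                    first_conf: float, second_conf: float) -> float:
    """Blend two candidate lens movements by normalized confidence weights."""
    total = first_conf + second_conf
    first_weight = first_conf / total    # e.g. 0.8 / 1.7 ≈ 0.47
    second_weight = second_conf / total  # e.g. 0.9 / 1.7 ≈ 0.53
    return first_weight * first_move_mm + second_weight * second_move_mm

# Worked example from the text: the target focusing information is to move
# the lens about 1.53 mm away from the target object.
print(round(fuse_focus_info(1.0, 2.0, 0.8, 0.9), 2))  # 1.53
```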
Referring to fig. 2, 3 and 9, in some embodiments, step 031 or step 034: acquiring a first confidence level of the first focus information and a second confidence level of the second focus information may include the steps of:
0311: determining a first confidence degree according to the first depth information of the target object or the intensity of the light emitted by the first sensor 20; and
0312: determining sharpness according to the image information of the target object, and determining a second confidence according to the sharpness.
In some embodiments, the focusing module 13 is configured to perform steps 0311 and 0312. That is, the focusing module 13 is configured to determine a first confidence according to the first depth information of the target object or the intensity of the light emitted by the first sensor 20, determine sharpness according to the image information of the target object, and determine a second confidence according to the sharpness.
In certain embodiments, one or more programs are used to perform steps 0311 and 0312. Processor 50 is configured to perform steps 0311 and 0312. That is, the processor 50 is configured to determine a first confidence according to the first depth information of the target object or the intensity of the light emitted by the first sensor 20, determine sharpness according to the image information of the target object, and determine a second confidence according to the sharpness.
Specifically, as can be seen from the above description, the first confidence is related to the intensity of the laser emitted by the first sensor 20 and the distance between the target object and the first sensor 20. That distance is exactly the first depth information obtained by the first sensor 20 when collecting the target object, so the processor 50 can determine the first confidence according to the first depth information or according to the intensity of the light emitted by the first sensor 20.
In some embodiments, the processor 50 may also determine the first confidence level based on the first depth information and the intensity of the light emitted by the first sensor 20. For example, different weights are preset for the intensity of the light emitted from the first sensor 20 and the first depth information, and the first confidence is determined by means of weighted average.
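As a sketch of the weighted-average variant just described: the patent does not give concrete weights or normalization rules, so the scoring functions, the equal 0.5/0.5 weights, and the assumed maximum sensing range below are all illustrative assumptions rather than the actual implementation.

```python
def first_confidence(depth_mm: float, laser_intensity: float,
                     depth_weight: float = 0.5, intensity_weight: float = 0.5,
                     max_range_mm: float = 4000.0) -> float:
    """Weighted average of a depth-based score and a laser-intensity score.

    Assumes closer targets and stronger returned laser signals make the
    time-of-flight measurement more trustworthy, and that the intensity
    has already been normalized to [0, 1].
    """
    depth_score = max(0.0, 1.0 - depth_mm / max_range_mm)
    intensity_score = min(1.0, max(0.0, laser_intensity))
    return depth_weight * depth_score + intensity_weight * intensity_score

print(first_confidence(2000.0, 1.0))  # 0.75
```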
The second confidence is related to the amount of image detail in the preview image acquired by the camera of the terminal at the preview interface. Image detail refers to gray-level changes in the preview image, such as isolated points, thin lines, and abrupt transitions in the picture. The greater the image sharpness, the more image detail the preview image contains. Therefore, the processor 50 may first determine the sharpness according to the image information of the target object, and then determine the second confidence according to the sharpness. As noted above, the more image detail, the higher the second confidence; accordingly, the higher the sharpness, the higher the second confidence.
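One common way to turn image information into a sharpness score is the variance of a Laplacian filter response, which grows with the amount of fine detail (isolated points, thin lines, abrupt transitions). The patent does not specify a measure, so the 4-neighbour Laplacian and the mapping of sharpness onto a (0, 1) confidence below are assumptions.

```python
import numpy as np

def sharpness(gray: np.ndarray) -> float:
    """Variance of a 4-neighbour Laplacian response; larger means more detail."""
    g = gray.astype(np.float64)
    lap = (np.roll(g, 1, axis=0) + np.roll(g, -1, axis=0)
           + np.roll(g, 1, axis=1) + np.roll(g, -1, axis=1) - 4.0 * g)
    return float(lap.var())

def second_confidence(gray: np.ndarray, scale: float = 1000.0) -> float:
    """Map sharpness onto (0, 1): higher sharpness, higher confidence."""
    s = sharpness(gray)
    return s / (s + scale)

flat = np.zeros((8, 8))                                   # no detail at all
checker = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)  # maximal detail
print(sharpness(flat), sharpness(checker))  # 0.0 16.0
```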
Referring to fig. 2, 10 and 11, the focusing method according to the embodiment of the present application further includes the steps of:
05: controlling the third sensor 40 to acquire a plurality of frames of images of the target object;
06: comparing the contrast of a plurality of regions in the image to select the region with the maximum contrast;
07: determining third focusing information according to the area with the maximum contrast;
08: acquiring a first confidence degree of the first focusing information and a second confidence degree of the second focusing information; and
09: and when the first confidence coefficient and the second confidence coefficient are both smaller than the preset confidence coefficient, focusing the camera of the terminal according to the third focusing information.
In some embodiments, focusing device 10 further includes a control module 15, a comparison module 16, a determination module 17, and an acquisition module 18. The control module 15 is configured to execute step 05, the comparison module 16 is configured to execute step 06, the determination module 17 is configured to execute step 07, the obtaining module 18 is configured to execute step 08, and the focusing module 13 is configured to execute step 09. That is, the control module 15 is configured to control the third sensor 40 to acquire a plurality of frame images of the target object. The comparison module 16 is configured to compare the contrast of a plurality of regions in the image to select the region with the highest contrast. The determining module 17 is configured to determine the third focusing information according to the region with the largest contrast. The obtaining module 18 is configured to obtain a first confidence level of the first focus information and a second confidence level of the second focus information. The focusing module 13 is configured to focus the camera of the terminal according to the third focusing information when the first confidence level and the second confidence level are both smaller than a preset confidence level.
In certain embodiments, one or more programs are used to perform steps 05, 06, 07, 08, and 09. Processor 50 is configured to perform step 05, step 06, step 07, step 08, and step 09. That is, the processor 50 is configured to control the third sensor 40 to acquire a plurality of frames of images of the target object; compare the contrast of a plurality of regions in the image to select the region with the maximum contrast; determine third focusing information according to the region with the maximum contrast; acquire a first confidence of the first focusing information and a second confidence of the second focusing information; and focus the camera of the terminal according to the third focusing information when the first confidence and the second confidence are both smaller than the preset confidence.
Specifically, referring to fig. 2, the terminal may further include a third sensor 40. The third sensor 40 is a Contrast Auto Focus (CAF) sensor, that is, a contrast-detection autofocus sensor.
More specifically, the processor 50 may first acquire an image of the target object through the third sensor 40. The image may be the image displayed by the terminal at the preview interface.
Next, the processor 50 divides the image into a plurality of regions and compares the contrast of the regions one by one, thereby selecting the region with the highest contrast.
It should be noted that, for the third sensor 40 (the CAF sensor), the lens position at which the contrast is largest is the in-focus position. Thus, once the processor 50 determines the region with the largest contrast, the third focusing information may be determined. The third focusing information may include the moving direction and the moving amount required for the current lens to reach the focus position. Due to the characteristics of the third sensor 40, the third focusing information obtained by the processor 50 is more accurate than the first focusing information and the second focusing information; the trade-off is that focusing the camera of the terminal through the third sensor 40 takes longer.
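The region-comparison step can be sketched as follows. The 3 × 3 grid size and the max-minus-min contrast measure are assumptions; the text only states that the image is divided into a plurality of regions and the region with the largest contrast is selected.

```python
import numpy as np

def best_contrast_region(gray: np.ndarray, rows: int = 3, cols: int = 3):
    """Return (row, col) of the grid cell with the highest contrast."""
    h, w = gray.shape
    best, best_cell = -1.0, (0, 0)
    for r in range(rows):
        for c in range(cols):
            cell = gray[r * h // rows:(r + 1) * h // rows,
                        c * w // cols:(c + 1) * w // cols]
            # Simple contrast measure: spread between brightest and darkest pixel.
            contrast = float(cell.max()) - float(cell.min())
            if contrast > best:
                best, best_cell = contrast, (r, c)
    return best_cell

img = np.zeros((9, 9))
img[4, 4] = 255.0  # bright point inside the central cell
print(best_contrast_region(img))  # (1, 1)
```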
Thus, the processor 50 first determines the first focusing information and the second focusing information, obtains the first confidence of the first focusing information and the second confidence of the second focusing information, and determines whether both are smaller than the preset confidence. When the first confidence and the second confidence are both smaller than the preset confidence, the processor 50 controls the third sensor 40 to complete the above work, obtains the third focusing information, and completes the focusing of the camera of the terminal.
When the first confidence or the second confidence is greater than the preset confidence, it indicates that the corresponding first focusing information or second focusing information is sufficiently accurate, and the processor 50 completes focusing of the camera of the terminal by selecting one of the first focusing information and the second focusing information. The selection between the first focusing information and the second focusing information is described in the above embodiments and is not repeated herein.
Therefore, even when the first focusing information and the second focusing information cannot achieve accurate focusing of the camera, accurate focusing can still be realized through the third focusing information, so that both focusing efficiency and focusing precision are taken into account, the focusing of the camera is completed, and the use experience of the user is ensured.
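Putting the decision flow of this section together: prefer the faster first or second focusing information when either confidence clears the preset threshold, and fall back to the slower but more accurate contrast-based third focusing information when both are below it. The threshold value and the tie-break rule below are illustrative assumptions, not values from the patent.

```python
def choose_focus_info(first_info: float, first_conf: float,
                      second_info: float, second_conf: float,
                      third_info: float, preset_conf: float = 0.5) -> float:
    """Select which focusing information should drive the lens motor."""
    if first_conf < preset_conf and second_conf < preset_conf:
        return third_info  # both fast paths unreliable: fall back to contrast AF
    # Otherwise pick whichever fast path is more confident.
    return first_info if first_conf >= second_conf else second_info

print(choose_focus_info(1.0, 0.2, 2.0, 0.3, 3.0))  # 3.0 (fallback to CAF)
print(choose_focus_info(1.0, 0.8, 2.0, 0.6, 3.0))  # 1.0
```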
Referring to fig. 12, the present application further provides a non-volatile computer-readable storage medium 300 containing a computer program 301. The computer program 301, when executed by the one or more processors 50, causes the one or more processors 50 to perform the focusing method of any of the embodiments described above.
For example, the computer program 301, when executed by the one or more processors 50, causes the processor 50 to perform the following focusing method:
01: acquiring first depth information of the target object according to the first sensor 20 to determine first focus information;
02: acquiring image information of the target object according to the second sensor 30 to determine second focus information; and
03: focusing the camera of the terminal according to the first focusing information and the second focusing information.
As another example, the computer program 301, when executed by the one or more processors 50, causes the processor 50 to perform the following focusing method:
011: acquiring depth information of different positions of the target object according to the first sensor 20; and
012: determining first depth information according to the depth information of the different positions.
Also for example, the computer program 301, when executed by the one or more processors 50, causes the processor 50 to perform the following focusing method:
031: acquiring a first confidence degree of the first focusing information and a second confidence degree of the second focusing information; and
032: when the first confidence coefficient is greater than the second confidence coefficient, selecting the first focusing information to focus the camera of the terminal; or
033: when the second confidence coefficient is greater than the first confidence coefficient, selecting the second focusing information to focus the camera of the terminal.
In the description herein, references to the description of "certain embodiments," "in one example," "exemplary," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Moreover, various embodiments or examples and features of various embodiments or examples described in this specification can be combined and combined by one skilled in the art without being mutually inconsistent.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and not to be construed as limiting the present application, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (12)

1. A focusing method, comprising:
acquiring first depth information of a target object according to a first sensor to determine first focusing information;
acquiring image information of the target object according to a second sensor to determine second focusing information; and
focusing a camera of a terminal according to the first focusing information and the second focusing information.
2. The focusing method of claim 1, wherein the first sensor is a time-of-flight sensor and the second sensor is a phase focusing sensor.
3. The focusing method according to claim 2, wherein the acquiring first depth information of the target object according to the first sensor comprises:
acquiring depth information of different positions of the target object according to the first sensor; and
determining the first depth information according to the depth information of the different positions.
4. The focusing method of claim 3, wherein the first depth information is depth information corresponding to a center of the depth information of the different positions, or an average value of the depth information of the different positions, or depth information corresponding to a touch position of a user on a display screen of the terminal.
5. The focusing method of claim 1, wherein the first sensor collects the first depth information at a predetermined frame rate, the focusing method further comprising:
when the difference value of the first depth information of two consecutive frames is greater than a preset threshold value, sending the first depth information to the terminal.
6. The focusing method according to claim 1, wherein the focusing a camera of a terminal according to the first focusing information and the second focusing information comprises:
acquiring a first confidence degree of the first focusing information and a second confidence degree of the second focusing information;
when the first confidence coefficient is larger than the second confidence coefficient, selecting the first focusing information to focus a camera of the terminal;
or when the second confidence coefficient is greater than the first confidence coefficient, selecting the second focusing information to focus the camera of the terminal.
7. The focusing method according to claim 1, wherein the focusing a camera of a terminal according to the first focusing information and the second focusing information comprises:
acquiring a first confidence degree of the first focusing information and a second confidence degree of the second focusing information;
determining a first weight corresponding to the first confidence coefficient and a second weight corresponding to the second confidence coefficient;
determining target focusing information according to the first weight, the first focusing information, the second weight and the second focusing information;
and focusing the camera of the terminal according to the target focusing information.
8. The focusing method according to claim 6 or 7, wherein the obtaining a first confidence level of the first focusing information and a second confidence level of the second focusing information comprises:
determining the first confidence degree according to the first depth information of the target object or the intensity of the light emitted by the first sensor; and
determining sharpness according to the image information of the target object, and determining the second confidence coefficient according to the sharpness.
9. The focusing method of claim 1, wherein the terminal further comprises a third sensor, and the focusing method further comprises:
controlling the third sensor to acquire an image of the target object;
comparing the contrast of a plurality of areas in the image to select the area with the maximum contrast;
determining third focusing information according to the area with the maximum contrast;
acquiring a first confidence degree of the first focusing information and a second confidence degree of the second focusing information;
and focusing the camera of the terminal according to the third focusing information when the first confidence coefficient and the second confidence coefficient are both smaller than a preset confidence coefficient.
10. A focusing device, comprising:
a first acquisition module for acquiring first depth information of a target object according to a first sensor to determine first focusing information;
a second acquisition module for acquiring image information of the target object according to a second sensor to determine second focusing information;
and the focusing module is used for focusing the camera of the terminal according to the first focusing information and the second focusing information.
11. A terminal, comprising:
one or more processors, memory; and
one or more programs, wherein one or more of the programs are stored in the memory and executed by one or more of the processors, the programs comprising instructions for performing the focusing method of any one of claims 1 to 9.
12. A non-transitory computer-readable storage medium storing a computer program which, when executed by one or more processors, implements the focusing method of any one of claims 1 to 9.
CN202210368511.3A 2022-04-06 2022-04-06 Focusing method and device and terminal computer readable storage medium Pending CN114666508A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210368511.3A CN114666508A (en) 2022-04-06 2022-04-06 Focusing method and device and terminal computer readable storage medium

Publications (1)

Publication Number Publication Date
CN114666508A true CN114666508A (en) 2022-06-24

Family

ID=82035787

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210368511.3A Pending CN114666508A (en) 2022-04-06 2022-04-06 Focusing method and device and terminal computer readable storage medium

Country Status (1)

Country Link
CN (1) CN114666508A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108028887A (en) * 2016-03-23 2018-05-11 华为技术有限公司 Focusing method of taking pictures, device and the equipment of a kind of terminal
CN111491105A (en) * 2020-04-24 2020-08-04 Oppo广东移动通信有限公司 Focusing method of mobile terminal, mobile terminal and computer storage medium
CN111654637A (en) * 2020-07-14 2020-09-11 RealMe重庆移动通信有限公司 Focusing method, focusing device and terminal equipment
WO2020259179A1 (en) * 2019-06-28 2020-12-30 Oppo广东移动通信有限公司 Focusing method, electronic device, and computer readable storage medium
CN112752026A (en) * 2020-12-31 2021-05-04 深圳市汇顶科技股份有限公司 Automatic focusing method, automatic focusing device, electronic equipment and computer readable storage medium
CN112822412A (en) * 2020-12-28 2021-05-18 维沃移动通信有限公司 Exposure method and electronic apparatus
CN113141468A (en) * 2021-05-24 2021-07-20 维沃移动通信(杭州)有限公司 Focusing method and device and electronic equipment
CN114125268A (en) * 2021-10-28 2022-03-01 维沃移动通信有限公司 Focusing method and device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination