CN109120839B - Focusing method of double cameras and terminal - Google Patents
- Publication number
- CN109120839B (application CN201710492055.2A)
- Authority
- CN
- China
- Prior art keywords
- fov
- camera
- selectable
- focusing
- confidence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/45—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Studio Devices (AREA)
Abstract
The embodiments of the application disclose a dual-camera focusing method and a terminal for improving the focusing efficiency of the cameras. The method in the embodiments of the application comprises the following steps: acquiring first focusing information of a first camera and second focusing information of a second camera; determining a first target field of view (FOV) of the first camera according to the first focusing information, and determining a first selectable FOV of the second camera according to the second focusing information; determining a second selectable FOV of the second camera according to the FOV correspondence between the first camera and the second camera and the first target FOV; determining a second target FOV of the second camera from the first selectable FOV and the second selectable FOV according to a first confidence in the first focusing information and a second confidence in the second focusing information; and controlling the first camera to move to the position corresponding to the first target FOV and the second camera to move to the position corresponding to the second target FOV.
Description
Technical Field
This application relates to the field of image processing, and in particular to a dual-camera focusing method and a terminal.
Background
With the rapid development of intelligent terminals such as mobile phones and tablet computers, mobile terminals increasingly assist users in daily life, work, and entertainment. The shooting function has become an indispensable part of the mobile terminal, and as people's quality of life improves, they demand ever better shooting results; the focusing behavior during shooting has therefore become increasingly important.
In the prior art, the two cameras (camera A and camera B) of a dual-camera mobile phone focus independently, with no explicit interaction between them. Because camera A and camera B do not exchange information, camera A often focuses quickly and stably while camera B exhibits pronounced focus hunting (the lens searches back and forth, so the user sees the preview alternate between sharp and blurred), which makes camera B's preview look very poor; moreover, using image data captured before focusing has converged degrades picture quality. A further problem in the prior art is that camera A completes focusing first while camera B searches slowly and only finishes focusing after a long delay.
With either behavior, the user sees either obvious sharp-blur switching during focusing or a slow focusing process; both lower the focusing efficiency of the cameras.
Disclosure of Invention
The embodiments of the application provide a dual-camera focusing method and a terminal for improving the focusing efficiency of the cameras.
A first aspect of the embodiments of the application provides a dual-camera focusing method, where the two cameras include a first camera and a second camera. Specifically:
When a camera in the terminal needs to focus, the terminal may acquire first focusing information of the first camera and second focusing information of the second camera; determine a first target FOV of the first camera according to the first focusing information and a first selectable FOV of the second camera according to the second focusing information; and then determine a second selectable FOV of the second camera according to the FOV correspondence between the first camera and the second camera and the first target FOV. The terminal then determines a second target FOV of the second camera from the first selectable FOV and the second selectable FOV according to a first confidence in the first focusing information and a second confidence in the second focusing information, where the first confidence characterizes the reliability of the first focusing information and the second confidence characterizes the reliability of the second focusing information. Finally, the terminal controls the first camera to move to the position corresponding to the first target FOV and the second camera to move to the position corresponding to the second target FOV. Because the terminal adjusts the two cameras jointly according to the correspondence between the first camera and the second camera, the focusing efficiency of the cameras can be improved, which improves the user experience.
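As a rough, non-authoritative sketch of this first-aspect flow (the FocusInfo fields, the fixed-offset stand-in for the FOV correspondence, and the confidence tie-break below are all illustrative assumptions, not the patented implementation):

```python
# Hypothetical sketch of the first-aspect focusing flow. FocusInfo and
# fov_correspondence() are stand-ins, not the patent's actual structures.
from dataclasses import dataclass

@dataclass
class FocusInfo:
    current_fov: float   # FOV at the camera's current lens position
    fov_diff: float      # FOV change suggested by the focusing algorithm
    confidence: float    # reliability of this focusing information

def fov_correspondence(first_fov: float) -> float:
    """Map a first-camera FOV to the matching second-camera FOV
    (stand-in for the calibrated FOV correspondence table)."""
    return first_fov - 2.5  # assumed fixed offset, for illustration only

def dual_camera_focus(info1: FocusInfo, info2: FocusInfo, preset: float = 1.0):
    first_target = info1.current_fov + info1.fov_diff       # first target FOV
    first_selectable = info2.current_fov + info2.fov_diff   # first selectable FOV
    second_selectable = fov_correspondence(first_target)    # second selectable FOV
    if abs(first_selectable - second_selectable) <= preset:
        second_target = first_selectable
    else:
        # candidates disagree: trust the higher-confidence source; the first
        # selectable FOV is backed by the second camera's own focusing info
        second_target = (first_selectable if info2.confidence >= info1.confidence
                         else second_selectable)
    return first_target, second_target
```

Both cameras are then driven to the lens positions corresponding to the returned target FOVs, which is what couples the two otherwise independent focusing loops.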
In one possible design, in a first implementation of the first aspect of the embodiments of the application, determining the second target FOV of the second camera from the first selectable FOV and the second selectable FOV according to the first confidence in the first focusing information and the second confidence in the second focusing information includes: when the terminal determines that the difference between the first selectable FOV and the second selectable FOV is greater than a preset value, determining the second target FOV from the first selectable FOV and the second selectable FOV according to the first confidence and the second confidence. This adds a precondition for the confidence-based determination of the second target FOV and enriches the implementations of the embodiment.
In one possible design, in a second implementation of the first aspect of the embodiments of the application, the method further includes: when the terminal determines that the difference between the first selectable FOV and the second selectable FOV is not greater than the preset value, determining the first selectable FOV as the second target FOV of the second camera. This supplies the result for the case in which the difference is not greater than the preset value, increasing the completeness of the embodiment.
In one possible design, in a third implementation of the first aspect of the embodiments of the application, determining the second target FOV of the second camera from the first selectable FOV and the second selectable FOV according to the first confidence in the first focusing information and the second confidence in the second focusing information includes: when the terminal determines that the moving direction indicated by the first selectable FOV is opposite to the moving direction indicated by the second selectable FOV, determining the second target FOV from the first selectable FOV and the second selectable FOV according to the first confidence and the second confidence. This adds another precondition for the confidence-based determination and further enriches the implementations of the embodiment.
In one possible design, in a fourth implementation of the first aspect of the embodiments of the application, determining the first target FOV of the first camera according to the first focusing information and the first selectable FOV of the second camera according to the second focusing information includes: the terminal first acquires a first current FOV and a first FOV difference of the first camera from the first focusing information, and a second current FOV and a second FOV difference from the second focusing information; it then determines the first target FOV of the first camera according to the first current FOV and the first FOV difference, and the first selectable FOV of the second camera according to the second current FOV and the second FOV difference. This details how the two FOVs are determined from the focusing information and enriches the implementations of the embodiment.
In one possible design, in a fifth implementation of the first aspect of the embodiments of the application, determining the second target FOV of the second camera from the first selectable FOV and the second selectable FOV according to the first confidence in the first focusing information and the second confidence in the second focusing information includes: if the focusing manner of the first focusing information is the same as that of the second focusing information, selecting the selectable FOV corresponding to the greater of the first confidence and the second confidence as the second target FOV. This describes how the second target FOV is selected when the focusing manners match.
In one possible design, in a sixth implementation of the first aspect of the embodiments of the application, determining the second target FOV of the second camera from the first selectable FOV and the second selectable FOV according to the first confidence in the first focusing information and the second confidence in the second focusing information includes: if the focusing manner of the first focusing information differs from that of the second focusing information, selecting, according to a priority ranking of the focusing manners, the selectable FOV corresponding to the confidence of the higher-priority focusing manner as the second target FOV. This describes how the second target FOV is selected when the focusing manners differ.
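The selection rules of the fifth and sixth implementations can be sketched together. A hedged illustration: the priority ordering below is an assumed example (the patent only states that focusing manners are priority-ranked, not what the ranking is), and all function and parameter names are hypothetical.

```python
# Hypothetical selection rule combining the fifth and sixth implementations.
# PRIORITY is an assumed example ordering; lower value = higher priority.
PRIORITY = {"phase": 0, "laser": 1, "contrast": 2}

def select_second_target(first_selectable, second_selectable,
                         conf1, manner1, conf2, manner2):
    # second_selectable derives from the first camera's info (conf1, manner1);
    # first_selectable derives from the second camera's info (conf2, manner2).
    if manner1 == manner2:
        # same focusing manner: pick the candidate with the greater confidence
        return first_selectable if conf2 >= conf1 else second_selectable
    # different manners: pick the candidate whose manner ranks higher
    return (first_selectable if PRIORITY[manner2] < PRIORITY[manner1]
            else second_selectable)
```

For example, under these assumptions, two phase-difference readings are compared by confidence alone, while a phase-difference reading would win over a contrast reading regardless of confidence.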
In one possible design, in a seventh implementation of the first aspect of the embodiments of the application, the FOV correspondence is a FOV correspondence table between the first camera and the second camera. This specifies the form in which the FOV correspondence exists.
In one possible design, in an eighth implementation of the first aspect of the embodiments of the application, at least one of the first focusing information and the second focusing information is obtained by any one of: a phase difference focusing system, a laser focusing system, or a contrast focusing system. This illustrates the focusing manners behind the focusing information and increases the operability of the embodiment.
A second aspect of the embodiments of the application provides a terminal that implements dual-camera focusing, where the two cameras include a first camera and a second camera. The terminal includes:
an acquisition unit, configured to acquire first focusing information of the first camera and second focusing information of the second camera;
a first determining unit, configured to determine a first target field of view FOV of the first camera according to the first focus information, and determine a first selectable FOV of the second camera according to the second focus information;
a second determining unit, configured to determine a second selectable FOV of the second camera according to the FOV correspondence between the first camera and the second camera and the first target FOV;
a third determining unit, configured to determine a second target FOV of the second camera from the first selectable FOV and the second selectable FOV according to a first confidence in the first focus information and a second confidence in the second focus information, where the first confidence is used to characterize a reliability degree of the first focus information, and the second confidence is used to characterize a reliability degree of the second focus information;
and the control unit is used for controlling the first camera to move to the position corresponding to the first target FOV and controlling the second camera to move to the position corresponding to the second target FOV.
The terminal in the embodiment of the application adjusts the two cameras jointly according to the correspondence between the first camera and the second camera, which improves the focusing efficiency of the cameras and thus the user experience.
In one possible design, in a first implementation manner of the second aspect of the embodiment of the present application, the third determining unit includes:
a first determining subunit, configured to determine, when the difference between the first selectable FOV and the second selectable FOV is greater than a preset value, the second target FOV from the first selectable FOV and the second selectable FOV according to the first confidence and the second confidence.
This adds a precondition for the confidence-based determination of the second target FOV and enriches the implementations of the embodiment.
In a possible design, in a second implementation manner of the second aspect of the embodiment of the present application, the terminal further includes:
a fourth determining unit, configured to determine the first selectable FOV as the second target FOV of the second camera when the difference between the first selectable FOV and the second selectable FOV is not greater than the preset value.
This supplies the result for the case in which the difference is not greater than the preset value, increasing the completeness of the embodiment.
In a possible design, in a third implementation manner of the second aspect of the embodiment of the present application, the third determining unit includes:
a second determining subunit, configured to determine the second target FOV from the first selectable FOV and the second selectable FOV according to the first confidence and the second confidence when the moving direction indicated by the first selectable FOV is opposite to the moving direction indicated by the second selectable FOV.
This adds another precondition for the confidence-based determination and further enriches the implementations of the embodiment.
In a possible design, in a fourth implementation manner of the second aspect of the embodiment of the present application, the first determining unit includes:
a first acquiring subunit, configured to acquire a first current FOV and a first FOV difference of the first camera from the first focus information, and acquire a second current FOV and a second FOV difference from the second focus information;
a third determining subunit, configured to determine a first target FOV of the first camera according to the first current FOV and the first FOV difference, and determine a first selectable FOV of the second camera according to the second current FOV and the second FOV difference.
This details how the first target FOV of the first camera is determined from the first focusing information and how the first selectable FOV of the second camera is determined from the second focusing information, enriching the implementations of the embodiment.
In a possible design, in a fifth implementation manner of the second aspect of the embodiment of the present application, the third determining unit includes:
a first selecting subunit, configured to select, when the focusing manner of the first focusing information is the same as that of the second focusing information, the selectable FOV corresponding to the greater of the first confidence and the second confidence as the second target FOV.
This describes how the second target FOV is selected when the focusing manners match.
In a possible design, in a sixth implementation manner of the second aspect of the embodiment of the present application, the third determining unit includes:
a second selecting subunit, configured to select, according to a priority ranking of the focusing manners, the selectable FOV corresponding to the confidence of the higher-priority focusing manner as the second target FOV when the focusing manner of the first focusing information differs from that of the second focusing information.
This describes how the second target FOV is selected when the focusing manners differ.
In a possible design, in a seventh implementation manner of the second aspect of the embodiment of the present application, the FOV correspondence is a FOV correspondence table between the first camera and the second camera.
This specifies the form in which the FOV correspondence exists.
In one possible design, in an eighth implementation manner of the second aspect of the embodiment of the present application, at least one of the first focus information and the second focus information is obtained by any one of: a phase difference focusing system, a laser focusing system, or a contrast focusing system.
This illustrates the focusing manners behind the focusing information and increases the operability of the embodiment.
A third aspect of the embodiments of the application provides a terminal that implements dual-camera focusing, where the two cameras include a first camera and a second camera. The terminal includes:
a processor and a memory;
the memory stores operating instructions;
the processor is configured to perform the following steps by invoking the operating instructions stored in the memory:
acquiring first focusing information of a first camera and second focusing information of a second camera;
determining a first target field of view (FOV) of the first camera according to the first focusing information, and determining a first selectable FOV of the second camera according to the second focusing information;
determining a second selectable FOV of the second camera according to the FOV correspondence between the first camera and the second camera and the first target FOV;
determining a second target FOV of the second camera from the first selectable FOV and the second selectable FOV according to a first confidence in the first focus information and a second confidence in the second focus information, the first confidence characterizing a degree of reliability of the first focus information and the second confidence characterizing a degree of reliability of the second focus information;
and controlling the first camera to move to the position corresponding to the first target FOV and controlling the second camera to move to the position corresponding to the second target FOV.
Because the terminal in the embodiment of the application adjusts the two cameras jointly according to the correspondence between the first camera and the second camera, the focusing efficiency of the cameras can be improved, which improves the user experience.
Yet another aspect of the present application provides a computer-readable storage medium having stored therein instructions, which when executed on a computer, cause the computer to perform the method of the above-described aspects.
Yet another aspect of the present application provides a computer program product containing instructions which, when run on a computer, cause the computer to perform the method of the above-described aspects.
Drawings
FIG. 1 is a schematic diagram of a camera acquiring a FOV in an embodiment of the present invention;
FIG. 2 is a diagram of camera position versus FOV in an embodiment of the present invention;
fig. 3 is a structural block diagram of a terminal in an embodiment of the present invention;
FIG. 4 is a schematic diagram of an embodiment of a focusing method for two cameras in an embodiment of the present invention;
FIG. 5 is a schematic diagram of another embodiment of a focusing method for two cameras in an embodiment of the present invention;
fig. 6 is a schematic diagram of an embodiment of a terminal in an embodiment of the present invention;
fig. 7 is a schematic diagram of another embodiment of the terminal in the embodiment of the present invention.
Detailed Description
The embodiment of the application provides a focusing method and a terminal of double cameras, which are used for improving the focusing efficiency of the cameras.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
As shown in fig. 1, the terminal in the embodiment of the present invention may obtain a field of view (FOV) through a camera. The FOV is defined as the angle, with the lens of the optical instrument at its vertex, formed by the two rays bounding the maximum range within which the image of the measured object can pass through the lens. The angle aob shown in fig. 1 is the field angle. The size of the field angle determines the field range of the optical instrument: the larger the field angle, the larger the field of view and the smaller the optical magnification, and a target object outside this angle is not captured by the lens.
As shown in fig. 2, the FOV changes as the position of the camera changes: when the camera moves to the left the FOV decreases, and when it moves to the right the FOV increases. As the lens is pushed from macro toward infinity, the FOV increases monotonically but non-linearly.
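Because the position-to-FOV relationship is monotone, a calibrated table can be inverted by interpolation to find the lens position for a target FOV. A minimal sketch, assuming made-up calibration samples (real values would come from per-module calibration):

```python
from bisect import bisect_left

# Assumed calibration samples: lens position (motor steps) vs FOV (degrees).
# Values are illustrative only, chosen to be monotonically increasing.
POSITIONS = [0, 100, 200, 300, 400]         # macro ... infinity
FOVS      = [60.0, 64.0, 67.0, 69.5, 71.0]  # increases with position

def position_for_fov(target_fov: float) -> float:
    """Invert the monotone position->FOV table by linear interpolation,
    clamping to the ends of the calibrated range."""
    i = bisect_left(FOVS, target_fov)
    if i == 0:
        return POSITIONS[0]
    if i == len(FOVS):
        return POSITIONS[-1]
    f0, f1 = FOVS[i - 1], FOVS[i]
    p0, p1 = POSITIONS[i - 1], POSITIONS[i]
    return p0 + (p1 - p0) * (target_fov - f0) / (f1 - f0)
```

Monotonicity is what makes this inversion well defined; the non-linearity is handled piecewise between calibration points.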
Fig. 3 shows a structural block diagram of a terminal in an embodiment of the present invention. The terminal includes: a network device 301, an input unit 302, an I/O interface 303, a display unit 304, an audio circuit 305, a processor 306, an image processing unit 307, a power supply 308, an RF circuit 309, a memory 310, a camera 311, and the like. Those skilled in the art will appreciate that the terminal structure shown in fig. 3 is not limiting; the terminal may include more or fewer components than shown, combine some components, or arrange the components differently.
The functions of the various constituent elements of the terminal are described below in conjunction with fig. 3:
The input unit 302 is configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal. The input unit 302 may be a keyboard, a touch panel, or the like.
The I/O interface 303 connects the processor 306 to I/O circuits and peripherals via a system bus. The peripherals may be various external devices, such as an earphone or a charger.
The display unit 304 is configured to display information input by the user or provided to the user, as well as the various menus of the terminal. The display unit 304 may be any of various types of screens, including a touch screen.
The audio circuit 305 may convert the received audio data into an electrical signal.
The processor 306 is configured to control the operation of the terminal and to perform the processing operations required by its various functions.
An image processing unit 307 for processing image information acquired by the camera 311 or executing an image processing instruction of a user.
And a power supply 308 for supplying power to the terminal. For example, power supply 308 is a power management circuit.
The RF circuit 309 is configured to receive and transmit wireless signals during information transmission and reception or during a call, and may operate in cooperation with the network device 301. The wireless signal may be a cellular wireless signal, a short-range wireless signal, or the like.
A memory 310 for storing software programs and modules.
The camera 311 is used to acquire information such as an image, for example, to perform shooting.
The following describes in detail a dual-camera focusing method in this embodiment, where the two cameras include a first camera and a second camera.
referring to fig. 4, an embodiment of a focusing method for two cameras in the embodiment of the present invention includes:
401. Acquire first focusing information of the first camera and second focusing information of the second camera.
In this embodiment, when the first camera and/or the second camera need to capture image information (for example, when the cameras start, during binocular synthesis, or when the blurring function is turned on), or when a device such as a gyroscope inside the terminal senses that the terminal has rotated, the processor in the terminal may instruct the first camera and the second camera to start focusing, acquiring first focusing information through the first camera and second focusing information through the second camera. The focusing information is used to focus the cameras and includes a confidence, a FOV difference, a current FOV value, and the like, where the confidence characterizes the reliability of the focusing information.
The focusing information may be generated by a camera, which may include optics, mechanical structures (e.g., motors), sensors, and processing circuitry. Specifically, the processing circuitry may generate the focusing information in a preset manner. The algorithm or mechanism for generating focusing information is typically supplied by camera vendors on the market, and different vendors generally use different techniques to generate it. For example, at least one of the first focusing information and the second focusing information of this embodiment is obtained by any one of: a phase difference focusing system, a laser focusing system, or a contrast focusing system.
402. Acquire a first current FOV and a first FOV difference of the first camera from the first focusing information, and a second current FOV and a second FOV difference from the second focusing information.
In this embodiment, having acquired the first focusing information of the first camera and the second focusing information of the second camera, the terminal extracts the first current FOV of the first camera from the first focusing information and obtains the first FOV difference of the first camera from the first focusing information through a preset focusing algorithm; likewise, it extracts the second current FOV of the second camera from the second focusing information and obtains the second FOV difference of the second camera from the second focusing information through the preset focusing algorithm. In other words, the focusing algorithm derives each FOV difference from the corresponding focusing information.
403. Determine a first target FOV of the first camera according to the first current FOV and the first FOV difference, and a first selectable FOV of the second camera according to the second current FOV and the second FOV difference.
In this embodiment, after obtaining the first current FOV, the first FOV difference, the second current FOV, and the second FOV difference, the terminal determines the target FOV of the first camera, that is, the first target FOV, according to the first current FOV and the first FOV difference, and determines the first selectable FOV of the second camera according to the second current FOV and the second FOV difference. The first target FOV is determined as α = β + χ, where α is the first target FOV, β is the first current FOV, and χ is the first FOV difference. The first selectable FOV is determined analogously as δ = ε + φ, where δ is the first selectable FOV, ε is the second current FOV, and φ is the second FOV difference.
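A trivial sketch of the two formulas (the function names are ours, not the patent's):

```python
def first_target_fov(beta: float, chi: float) -> float:
    """alpha = beta + chi: first camera's current FOV plus its FOV difference."""
    return beta + chi

def first_selectable_fov(epsilon: float, phi: float) -> float:
    """delta = epsilon + phi: second camera's current FOV plus its FOV difference."""
    return epsilon + phi
```

For instance, a first current FOV of 70 with a FOV difference of 1 yields a first target FOV of 71, the value used in the lookup example of step 404 (the inputs 70 and 1 are our own illustrative numbers).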
404. A second selectable FOV of the second camera is determined according to the FOV correspondence between the first camera and the second camera and the first target FOV.
In this embodiment, the FOV correspondence between the first camera and the second camera takes the form of a FOV correspondence table, which stores, in addition to the FOV correspondence between the two cameras, the correspondence between FOV values and camera positions. The FOV correspondence table is shown in table 1:
table 1:
When the first target FOV of the first camera is 71, looking up the FOV correspondence table shows that the corresponding FOV of the second camera is 68.5, i.e., the second selectable FOV is 68.5.
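A minimal sketch of this table lookup follows. Only the 71 → 68.5 pair appears in the text; the other table entries and the nearest-entry fallback are illustrative assumptions.

```python
# FOV correspondence table between the first and second cameras.
# 71.0 -> 68.5 is from the embodiment; the other rows are placeholders.
FOV_CORRESPONDENCE = {
    70.0: 67.5,
    71.0: 68.5,
    72.0: 69.5,
}

def second_selectable_fov(first_target_fov):
    # Exact hit: return the stored second-camera FOV.
    if first_target_fov in FOV_CORRESPONDENCE:
        return FOV_CORRESPONDENCE[first_target_fov]
    # Otherwise fall back to the nearest tabulated first-camera FOV.
    nearest = min(FOV_CORRESPONDENCE, key=lambda k: abs(k - first_target_fov))
    return FOV_CORRESPONDENCE[nearest]
```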
405. When the difference between the first selectable FOV and the second selectable FOV is greater than a preset value, a second target FOV is determined from the first selectable FOV and the second selectable FOV according to the first confidence and the second confidence.
In this embodiment, suppose the first selectable FOV of the second camera, obtained from the second focus information, is 67, and the second selectable FOV, obtained from the first target FOV through the FOV correspondence between the two cameras, is 68.5. The difference between the first selectable FOV and the second selectable FOV is then 1.5. If the preset value is 1, then because 1.5 is greater than 1, at least one of the two selectable FOVs must be inappropriate. In this case, the second target FOV of the second camera needs to be determined from the first selectable FOV and the second selectable FOV according to the first confidence in the first focus information and the second confidence in the second focus information, where the first confidence characterizes the reliability of the first focus information and the second confidence characterizes the reliability of the second focus information.
If the focusing mode of the first focus information is the same as that of the second focus information, the selectable FOV corresponding to the larger of the first confidence and the second confidence is selected as the target FOV of the second camera, that is, the second target FOV. However, if both the first confidence and the second confidence are below a threshold (a confidence below the threshold indicates that the focus information is unreliable), the terminal needs to acquire focus information corresponding to another focusing mode of the first camera and/or the second camera, and then performs steps 401 to 406 with the newly acquired focus information.
If the focusing mode of the first focus information is different from that of the second focus information, the selectable FOV corresponding to the higher-priority confidence is selected as the second target FOV according to the priority order of the focusing modes, where the priority order is: phase difference focusing > laser focusing > contrast focusing. For example, if the focusing mode of the first focus information is phase difference focusing and that of the second focus information is laser focusing, then because phase difference focusing has a higher priority than laser focusing, the selectable FOV corresponding to the first focus information is selected as the target FOV of the second camera, that is, the second target FOV.
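The priority rule can be sketched as a small lookup. The mode names are illustrative identifiers, and the two FOV arguments stand for the selectable FOVs derived from the first and second focus information, respectively.

```python
# Priority order from the embodiment: phase difference > laser > contrast.
PRIORITY = {"phase_difference": 3, "laser": 2, "contrast": 1}

def select_by_mode_priority(first_mode, fov_from_first_info,
                            second_mode, fov_from_second_info):
    # The selectable FOV backed by the higher-priority focusing mode
    # becomes the second target FOV.
    if PRIORITY[first_mode] >= PRIORITY[second_mode]:
        return fov_from_first_info
    return fov_from_second_info
```

With the example from the text, phase difference outranks laser, so the FOV corresponding to the first focus information is selected.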
it should be noted that, in this embodiment, at least one camera corresponds to at least two focusing modes. For example, the focusing system may be a phase difference focusing system, a laser focusing system, or a contrast focusing system. Each focusing method is a well-established conventional technique in the art, and the present embodiment will not be described in detail.
It should be noted that the preset value mentioned in this embodiment is determined according to the characteristics of the cameras; it may be 1 or another value and is not limited here.
It should be noted that, if the difference between the first selectable FOV and the second selectable FOV is not greater than the preset value, the terminal may directly determine the first selectable FOV as the second target FOV of the second camera.
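The arbitration in step 405, together with the fallback just noted, can be sketched as follows (assuming a preset value of 1). Note the pairing: the second selectable FOV was derived from the first camera's focus information, and the first selectable FOV from the second camera's.

```python
def choose_second_target_fov(first_selectable, second_selectable,
                             first_confidence, second_confidence,
                             preset_value=1.0):
    # Candidates close enough: use the first selectable FOV directly.
    if abs(first_selectable - second_selectable) <= preset_value:
        return first_selectable
    # Otherwise keep the candidate backed by the more reliable focus
    # information: the second selectable FOV came from the first camera's
    # focus information, the first selectable FOV from the second camera's.
    if first_confidence >= second_confidence:
        return second_selectable
    return first_selectable
```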
406. The first camera is controlled to move to the position corresponding to the first target FOV, and the second camera is controlled to move to the position corresponding to the second target FOV.
In this embodiment, the first camera is equipped with a first motor and the second camera with a second motor. After determining the first target FOV and the second target FOV, the terminal controls the first motor to push the first camera to the position corresponding to the first target FOV and controls the second motor to push the second camera to the position corresponding to the second target FOV, where the correspondence between FOVs and camera positions is shown in table 1 in step 404.
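A sketch of the final move follows, assuming a hypothetical motor driver and hypothetical FOV-to-position entries (table 1 would hold the real mapping).

```python
class Motor:
    # Minimal stand-in for a voice-coil motor driver; move_to is hypothetical.
    def __init__(self):
        self.position = 0

    def move_to(self, position):
        self.position = position

# Hypothetical FOV-to-position entries; the real values come from table 1.
FOV_TO_POSITION = {71.0: 120, 68.5: 95}

def focus_both(first_motor, second_motor, first_target_fov, second_target_fov):
    # Push each camera to the position corresponding to its target FOV.
    first_motor.move_to(FOV_TO_POSITION[first_target_fov])
    second_motor.move_to(FOV_TO_POSITION[second_target_fov])
```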
It should be noted that the first camera and the second camera in this embodiment may be interchanged.
In this embodiment, the two cameras are adjusted in linkage according to the FOV correspondence between the first camera and the second camera, which reduces unsuitable search steps, improves the focusing efficiency of the cameras, and thus improves user experience.
Referring to fig. 5, another embodiment of the focusing method for dual cameras in the embodiment of the present invention includes:
501. First focus information of the first camera and second focus information of the second camera are obtained.
502. A first current FOV and a first FOV difference of the first camera are acquired from the first focus information, and a second current FOV and a second FOV difference are acquired from the second focus information.
503. A first target FOV of the first camera is determined according to the first current FOV and the first FOV difference, and a first selectable FOV of the second camera is determined according to the second current FOV and the second FOV difference.
504. A second selectable FOV of the second camera is determined according to the FOV correspondence between the first camera and the second camera and the first target FOV.
In this embodiment, steps 501 to 504 are similar to steps 401 to 404 in fig. 4, and are not described herein again.
505. When the direction of movement indicated by the first selectable FOV is opposite to the direction of movement indicated by the second selectable FOV, the second target FOV is determined from the first selectable FOV and the second selectable FOV according to the first confidence and the second confidence.
In this embodiment, when the terminal determines, through the correspondence between FOVs and camera positions, that the moving direction indicated by the first selectable FOV is opposite to that indicated by the second selectable FOV (for example, the second camera would move left according to the first selectable FOV but right according to the second selectable FOV), the focusing strategies of the first camera and the second camera contradict each other, and one of the focusing directions must be wrong. The second target FOV is then determined from the first selectable FOV and the second selectable FOV according to the first confidence and the second confidence, where the first confidence characterizes the reliability of the first focus information and the second confidence characterizes the reliability of the second focus information.
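The opposite-direction test can be sketched by comparing the motor displacements implied by the two candidates. Positions are assumed to come from the FOV-position correspondence; the numbers in the tests are illustrative.

```python
def directions_conflict(current_position, position_first_selectable,
                        position_second_selectable):
    # The candidates conflict when they would move the lens in opposite
    # directions from its current position.
    step_a = position_first_selectable - current_position
    step_b = position_second_selectable - current_position
    return step_a * step_b < 0
```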
If the focusing mode of the first focus information is the same as that of the second focus information, the selectable FOV corresponding to the larger of the first confidence and the second confidence is selected as the target FOV of the second camera, that is, the second target FOV. However, if both the first confidence and the second confidence are below a threshold (a confidence below the threshold indicates that the focus information is unreliable), the terminal needs to acquire focus information corresponding to another focusing mode of the first camera and/or the second camera, and then performs steps 501 to 506 with the newly acquired focus information.
If the focusing mode of the first focus information is different from that of the second focus information, the selectable FOV corresponding to the higher-priority confidence is selected as the second target FOV according to the priority order of the focusing modes, where the priority order is: phase difference focusing > laser focusing > contrast focusing. For example, if the focusing mode of the first focus information is phase difference focusing and that of the second focus information is laser focusing, then because phase difference focusing has a higher priority than laser focusing, the selectable FOV corresponding to the first focus information is selected as the target FOV of the second camera, that is, the second target FOV.
it should be noted that, in this embodiment, at least one camera corresponds to at least two focusing modes. For example, the focusing system may be a phase difference focusing system, a laser focusing system, or a contrast focusing system. Each focusing method is a well-established conventional technique in the art, and the present embodiment will not be described in detail.
It should be noted that the preset value mentioned in this embodiment is determined according to the characteristics of the cameras; it may be 1 or another value and is not limited here.
506. The first camera is controlled to move to the position corresponding to the first target FOV, and the second camera is controlled to move to the position corresponding to the second target FOV.
In this embodiment, step 506 is similar to step 406 in fig. 4, and detailed description thereof is omitted here.
In this embodiment, the two cameras are adjusted in linkage according to the FOV correspondence between the first camera and the second camera, which prevents a camera from searching in the wrong direction, improves the focusing efficiency of the cameras, and thus improves user experience.
In the above embodiments, the confidence is included in the focus information, which is generated during the focusing process of the camera. The camera may form corresponding focus information through different focusing modes, such as the phase difference, laser, or contrast focusing modes mentioned above; the focus information obtained by different focusing modes may differ, and the confidence in the focus information of any focusing mode is used to characterize the reliability of that focus information. For each focusing mode, the process of acquiring focus information including a confidence is prior art and may be provided by the camera vendor; it is outside the scope of the present disclosure.
The above embodiment describes in detail the focusing method of the dual cameras in the embodiment of the present invention, and the following describes in detail the terminal in the embodiment of the present invention.
Referring to fig. 6, fig. 6 is a diagram illustrating an embodiment of a terminal for implementing dual-camera focusing according to an embodiment of the present invention, where the dual cameras include a first camera and a second camera, and the embodiment includes:
an obtaining unit 601, configured to obtain first focus information of a first camera and second focus information of a second camera;
a first determining unit 602, configured to determine a first target field of view FOV of the first camera according to the first focus information, and determine a first selectable FOV of the second camera according to the second focus information;
a second determining unit 603, configured to determine a second selectable FOV of the second camera according to the FOV corresponding relationship between the first camera and the second camera and the first target FOV;
a third determining unit 604, configured to determine a second target FOV of the second camera from the first selectable FOV and the second selectable FOV according to a first confidence in the first focus information and a second confidence in the second focus information, where the first confidence is used to characterize a reliability degree of the first focus information, and the second confidence is used to characterize a reliability degree of the second focus information;
and a control unit 605 for controlling the first camera to move to a position corresponding to the first target FOV and controlling the second camera to move to a position corresponding to the second target FOV.
Referring to fig. 7, fig. 7 is another embodiment of a terminal for implementing dual-camera focusing according to an embodiment of the present invention, where the dual cameras include a first camera and a second camera, the embodiment includes:
an obtaining unit 701, configured to obtain first focus information of a first camera and second focus information of a second camera;
a first determining unit 702, configured to determine a first target field of view FOV of the first camera according to the first focus information, and determine a first selectable FOV of the second camera according to the second focus information;
wherein the first determining unit 702 includes:
a first obtaining subunit 7021, configured to obtain a first current FOV and a first FOV difference of the first camera from the first focus information, and obtain a second current FOV and a second FOV difference from the second focus information;
a third determining subunit 7022, configured to determine the first target FOV of the first camera according to the first current FOV and the first FOV difference, and determine the first selectable FOV of the second camera according to the second current FOV and the second FOV difference.
A second determining unit 703, configured to determine a second selectable FOV of the second camera according to the FOV correspondence between the first camera and the second camera and the first target FOV;
a third determining unit 704, configured to determine a second target FOV of the second camera from the first selectable FOV and the second selectable FOV according to a first confidence in the first focus information and a second confidence in the second focus information, where the first confidence is used to characterize a reliability degree of the first focus information, and the second confidence is used to characterize a reliability degree of the second focus information;
wherein the third determining unit 704 further includes:
a first determining subunit 7041, configured to determine, when a difference between the first selectable FOV and the second selectable FOV is greater than a preset value, a second target FOV from the first selectable FOV and the second selectable FOV according to the first confidence degree and the second confidence degree;
and/or,
a second determining subunit 7042, configured to determine a second target FOV from the first selectable FOV and the second selectable FOV according to the first confidence degree and the second confidence degree when the moving direction indicated by the first selectable FOV is opposite to the moving direction indicated by the second selectable FOV.
and/or,
a first selecting subunit 7043, configured to select, when the focusing manner of the first focusing information is the same as the focusing manner of the second focusing information, an optional FOV corresponding to the greater of the first confidence degree and the second confidence degree as the second target FOV;
and/or,
a second selecting subunit 7044, configured to, when the focusing manner of the first focusing information is different from the focusing manner of the second focusing information, select, according to the priority ranking of the focusing manners, an optional FOV corresponding to the confidence coefficient with a higher priority from among the first confidence coefficient and the second confidence coefficient as the second target FOV.
The control unit 705 is configured to control the first camera to move to a position corresponding to the first target FOV and control the second camera to move to a position corresponding to the second target FOV.
A fourth determining unit 706, configured to determine the first selectable FOV as the second target FOV of the second camera when a difference between the first selectable FOV and the second selectable FOV is not greater than a preset value.
It should be noted that, in the above embodiments, all or part of the embodiments may be implemented by software, hardware, firmware or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a solid-state drive (SSD)).
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented as a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing over the prior art, may be embodied in whole or in part as a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods in the embodiments of the present application. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In one example, in the terminal embodiments corresponding to fig. 6 and 7, the terminal may be a software device or a hardware device for implementing the focusing of the two cameras. When the terminal is a hardware device that does not require software to run, each unit or sub-unit mentioned above may be an integrated circuit unit for implementing the relevant operation processing, i.e., each unit or sub-unit includes a circuit structure required by the integrated circuit, such as a digital circuit, a gate circuit, an analog circuit, or the like.
In another example, the terminal may include hardware to execute software instructions, see fig. 3. The processor 306 and memory 310 of fig. 3 will now be described in detail.
The memory 310 stores the following elements, such as executable modules or data structures, or a subset or an extended set thereof:
Operation instructions: including various coded operation instructions for implementing various operations;
Operating system: including various system programs for implementing various basic services and handling hardware-based tasks.
In the embodiment of the present invention, the memory 310 may store an instruction program, and the processor 306 is configured to read the instruction program and execute, under driving of the instruction program:
acquiring first focusing information of a first camera and second focusing information of a second camera;
determining a first target field of view (FOV) of the first camera according to the first focusing information, and determining a first selectable FOV of the second camera according to the second focusing information;
determining a second optional FOV of the second camera according to the FOV corresponding relation of the first camera and the second camera and the first target FOV;
determining a second target FOV of the second camera from the first selectable FOV and the second selectable FOV according to a first confidence coefficient in the first focus information and a second confidence coefficient in the second focus information, wherein the first confidence coefficient is used for representing the reliability degree of the first focus information, and the second confidence coefficient is used for representing the reliability degree of the second focus information;
and controlling the first camera to move to the position corresponding to the first target FOV and controlling the second camera to move to the position corresponding to the second target FOV.
The processor 306 controls the operation of the terminal, and may be, for example, a central processing unit (CPU). The processor 306 may integrate an image signal processor (ISP) or work with an external ISP, with the processor 306 controlling the ISP to perform image signal processing operations. In another embodiment, the processor 306 may itself be an ISP.
The method disclosed in the above embodiments of the present invention may be applied to, or implemented by, the processor 306. The processor 306 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be performed by integrated hardware logic circuits in the processor 306 or by instructions in the form of software. The processor 306 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logical blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be performed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or a register. The storage medium is located in the memory 310; the processor 306 reads the information in the memory 310 and completes the steps of the above method in combination with its hardware.
The description related to fig. 3 can be understood with reference to the description related to the method portion of fig. 4 and 5 and the effect thereof, which are not described herein in detail.
It should be noted that, the terminal mentioned in the embodiments of the present application may be a processor in a mobile phone terminal or the mobile phone terminal itself.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.
Claims (19)
1. A focusing method for dual cameras, wherein the dual cameras comprise a first camera and a second camera, the method comprising:
acquiring first focusing information of a first camera and second focusing information of a second camera;
determining a first target field of view (FOV) of the first camera according to the first focusing information, and determining a first selectable FOV of the second camera according to the second focusing information;
determining a second optional FOV of the second camera according to the FOV corresponding relation of the first camera and the second camera and the first target FOV;
determining a second target FOV of the second camera from the first selectable FOV and the second selectable FOV according to a first confidence in the first focus information and a second confidence in the second focus information, the first confidence characterizing a degree of reliability of the first focus information and the second confidence characterizing a degree of reliability of the second focus information;
and controlling the first camera to move to the position corresponding to the first target FOV and controlling the second camera to move to the position corresponding to the second target FOV.
2. The method of claim 1, wherein the determining a second target FOV of the second camera from the first selectable FOV and the second selectable FOV according to a first confidence in the first focus information and a second confidence in the second focus information comprises:
determining the second target FOV from the first selectable FOV and the second selectable FOV according to the first confidence and the second confidence when a difference between the first selectable FOV and the second selectable FOV is greater than a preset value.
3. The method of claim 2, further comprising:
determining the first selectable FOV as a second target FOV for the second camera if the difference between the first selectable FOV and the second selectable FOV is not greater than the preset value.
4. The method of claim 1, wherein the determining a second target FOV of the second camera from the first selectable FOV and the second selectable FOV according to a first confidence in the first focus information and a second confidence in the second focus information comprises:
determining the second target FOV from the first selectable FOV and the second selectable FOV according to the first confidence and the second confidence when the direction of movement indicated by the first selectable FOV is opposite to the direction of movement indicated by the second selectable FOV.
5. The method of any of claims 1-4, wherein determining the first target FOV for the first camera from the first focus information and determining the first selectable FOV for the second camera from the second focus information comprises:
acquiring a first current FOV and a first FOV difference value of the first camera from the first focus information, and acquiring a second current FOV and a second FOV difference value from the second focus information;
a first target FOV of the first camera is determined from the first current FOV and the first FOV difference, and a first selectable FOV of the second camera is determined from the second current FOV and the second FOV difference.
6. The method of any of claims 1-4, wherein the determining a second target FOV for the second camera from the first selectable FOV and the second selectable FOV according to a first confidence in the first focus information and a second confidence in the second focus information comprises:
and if the focusing mode of the first focusing information is the same as that of the second focusing information, selecting the selectable FOV corresponding to the larger one of the first confidence coefficient and the second confidence coefficient as the second target FOV.
7. The method of any of claims 1-4, wherein the determining a second target FOV for the second camera from the first selectable FOV and the second selectable FOV according to a first confidence in the first focus information and a second confidence in the second focus information comprises:
and if the focusing mode of the first focusing information is different from the focusing mode of the second focusing information, selecting an optional FOV corresponding to the confidence coefficient with higher priority in the first confidence coefficient and the second confidence coefficient as the second target FOV according to the priority sequence of the focusing modes.
8. The method of any of claims 1-4, wherein the FOV correspondence is a FOV correspondence table between the first camera and the second camera.
9. The method of any of claims 1 to 4, wherein at least one of the first focus information and the second focus information is derived by any of: a phase difference focusing system, a laser focusing system, or a contrast focusing system.
10. A terminal for implementing dual-camera focusing, wherein the dual cameras comprise a first camera and a second camera, and the terminal comprises:
the device comprises an acquisition unit, a processing unit and a control unit, wherein the acquisition unit is used for acquiring first focusing information of a first camera and second focusing information of a second camera;
a first determining unit, configured to determine a first target field of view FOV of the first camera according to the first focus information, and determine a first selectable FOV of the second camera according to the second focus information;
a second determining unit, configured to determine a second selectable FOV of the second camera according to a FOV correspondence between the first camera and the second camera and the first target FOV;
a third determining unit, configured to determine a second target FOV of the second camera from the first selectable FOV and the second selectable FOV according to a first confidence in the first focus information and a second confidence in the second focus information, where the first confidence is used to characterize a reliability degree of the first focus information, and the second confidence is used to characterize a reliability degree of the second focus information;
a control unit, configured to control the first camera to move to a position corresponding to the first target FOV and control the second camera to move to a position corresponding to the second target FOV.
11. The terminal according to claim 10, wherein the third determining unit comprises:
a first determining subunit configured to determine, when a difference between the first selectable FOV and the second selectable FOV is greater than a preset value, the second target FOV from the first selectable FOV and the second selectable FOV according to the first confidence degree and the second confidence degree.
12. The terminal of claim 11, wherein the terminal further comprises:
a fourth determining unit, configured to determine the first selectable FOV as the second target FOV of the second camera when the difference between the first selectable FOV and the second selectable FOV is not greater than the preset value.
13. The terminal according to claim 11, wherein the third determining unit comprises:
a second determining subunit configured to determine the second target FOV from the first selectable FOV and the second selectable FOV according to the first confidence degree and the second confidence degree when the movement direction indicated by the first selectable FOV is opposite to the movement direction indicated by the second selectable FOV.
14. The terminal according to any of claims 11 to 13, wherein the first determining unit comprises:
a first acquiring subunit, configured to acquire a first current FOV and a first FOV difference of the first camera from the first focus information, and acquire a second current FOV and a second FOV difference from the second focus information;
a third determining subunit, configured to determine a first target FOV of the first camera according to the first current FOV and the first FOV difference, and determine a first selectable FOV of the second camera according to the second current FOV and the second FOV difference.
15. The terminal according to any of claims 11 to 13, wherein the third determining unit comprises:
a first selecting subunit, configured to select, when the focusing manner of the first focus information is the same as the focusing manner of the second focus information, the selectable FOV corresponding to the greater of the first confidence and the second confidence as the second target FOV.
16. The terminal according to any of claims 11 to 13, wherein the third determining unit comprises:
a second selecting subunit, configured to select, according to a priority ordering of focusing manners, the selectable FOV corresponding to the confidence whose focusing manner has the higher priority, from the first confidence and the second confidence, as the second target FOV when the focusing manner of the first focus information is different from the focusing manner of the second focus information.
17. The terminal of any of claims 11 to 13, wherein the FOV correspondence is a FOV correspondence table between the first camera and the second camera.
18. The terminal of any of claims 11 to 13, wherein at least one of the first focus information and the second focus information is derived by any of: a phase difference focusing system, a laser focusing system, or a contrast focusing system.
19. A terminal for implementing focusing of dual cameras, wherein the dual cameras comprise a first camera and a second camera, the terminal comprising:
a processor and a memory;
the memory stores operating instructions;
the processor is configured to perform the following steps by invoking the operating instructions stored in the memory:
acquiring first focusing information of a first camera and second focusing information of a second camera;
determining a first target field of view (FOV) of the first camera according to the first focusing information, and determining a first selectable FOV of the second camera according to the second focusing information;
determining a second selectable FOV of the second camera according to a FOV correspondence between the first camera and the second camera and the first target FOV;
determining a second target FOV of the second camera from the first selectable FOV and the second selectable FOV according to a first confidence in the first focus information and a second confidence in the second focus information, the first confidence characterizing a degree of reliability of the first focus information and the second confidence characterizing a degree of reliability of the second focus information;
and controlling the first camera to move to the position corresponding to the first target FOV and controlling the second camera to move to the position corresponding to the second target FOV.
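The overall flow recited in claims 10 to 12 (and in the processor steps of claim 19) can be sketched as follows. `FocusInfo`, `fov_map`, and the preset threshold are hypothetical names and an assumed value; the claims specify the steps but not these concrete representations.

```python
from dataclasses import dataclass

@dataclass
class FocusInfo:
    current_fov: float   # camera's current field of view
    fov_diff: float      # FOV adjustment indicated by the focus measurement
    confidence: float    # reliability of this focus information

def focus_dual_cameras(info1, info2, fov_map, preset=0.5):
    """Return (first_target_fov, second_target_fov) per claims 10-12 (a sketch)."""
    first_target = info1.current_fov + info1.fov_diff       # first camera's target FOV
    first_selectable = info2.current_fov + info2.fov_diff   # second camera's own estimate
    second_selectable = fov_map(first_target)               # mapped from the first camera
    if abs(first_selectable - second_selectable) <= preset:
        # Estimates agree within the preset value: keep the second camera's
        # own estimate (cf. claim 12).
        second_target = first_selectable
    else:
        # Estimates diverge: fall back on the more reliable focus information
        # (cf. claim 11; claims 15-16 refine this choice further).
        second_target = (second_selectable
                         if info1.confidence >= info2.confidence
                         else first_selectable)
    return first_target, second_target
```

The terminal would then drive each camera to the position corresponding to its returned target FOV; `fov_map` stands in for the FOV correspondence table of claim 17.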
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710492055.2A CN109120839B (en) | 2017-06-23 | 2017-06-23 | Focusing method of double cameras and terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710492055.2A CN109120839B (en) | 2017-06-23 | 2017-06-23 | Focusing method of double cameras and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109120839A CN109120839A (en) | 2019-01-01 |
CN109120839B true CN109120839B (en) | 2020-10-16 |
Family
ID=64733914
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710492055.2A Active CN109120839B (en) | 2017-06-23 | 2017-06-23 | Focusing method of double cameras and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109120839B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109788199B (en) * | 2019-01-30 | 2021-02-05 | 上海创功通讯技术有限公司 | Focusing method suitable for terminal with double cameras |
CN111491105B (en) * | 2020-04-24 | 2021-07-27 | Oppo广东移动通信有限公司 | Focusing method of mobile terminal, mobile terminal and computer storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104822019A (en) * | 2015-03-31 | 2015-08-05 | 深圳市莫孚康技术有限公司 | Method for calculating camera field angle |
CN105847693A (en) * | 2016-04-27 | 2016-08-10 | 乐视控股(北京)有限公司 | Shooting method and system based on two-camera focusing |
CN106341598A (en) * | 2016-09-14 | 2017-01-18 | 北京环境特性研究所 | Optical-lens automatic focusing system and method thereof |
CN106502027A (en) * | 2016-11-22 | 2017-03-15 | 宇龙计算机通信科技(深圳)有限公司 | A kind of dual camera module and smart machine |
CN106534696A (en) * | 2016-11-29 | 2017-03-22 | 努比亚技术有限公司 | Focusing apparatus and method |
CN106851103A (en) * | 2017-02-27 | 2017-06-13 | 上海兴芯微电子科技有限公司 | Atomatic focusing method and device based on dual camera system |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104822019A (en) * | 2015-03-31 | 2015-08-05 | 深圳市莫孚康技术有限公司 | Method for calculating camera field angle |
CN105847693A (en) * | 2016-04-27 | 2016-08-10 | 乐视控股(北京)有限公司 | Shooting method and system based on two-camera focusing |
CN106341598A (en) * | 2016-09-14 | 2017-01-18 | 北京环境特性研究所 | Optical-lens automatic focusing system and method thereof |
CN106502027A (en) * | 2016-11-22 | 2017-03-15 | 宇龙计算机通信科技(深圳)有限公司 | A kind of dual camera module and smart machine |
CN106534696A (en) * | 2016-11-29 | 2017-03-22 | 努比亚技术有限公司 | Focusing apparatus and method |
CN106851103A (en) * | 2017-02-27 | 2017-06-13 | 上海兴芯微电子科技有限公司 | Atomatic focusing method and device based on dual camera system |
Also Published As
Publication number | Publication date |
---|---|
CN109120839A (en) | 2019-01-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20150026632A1 (en) | Portable electronic device and display control method | |
CN107395898B (en) | Shooting method and mobile terminal | |
CN106060406B (en) | Photographing method and mobile terminal | |
US9692978B2 (en) | Image capturing apparatus, control method therefor, and storage medium | |
KR101795040B1 (en) | Method for installing applications in a portable terminal | |
JP6259095B2 (en) | Method and terminal for acquiring panoramic image | |
US11003351B2 (en) | Display processing method and information device | |
US11647132B2 (en) | Communication terminal, method for controlling communication terminal, communication system, and storage medium | |
JP2015524954A (en) | Multiple display method using a plurality of communication terminals, machine-readable storage medium, and communication terminal | |
KR20100008936A (en) | Portable terminal having dual camera and photographing method using the same | |
US11082600B2 (en) | Electronic apparatus that performs wireless communication with an image capturing device at two different communication speeds, and method for controlling same | |
KR20130023074A (en) | Method and apparatus for performing video communication in a mobile terminal | |
US20200059597A1 (en) | Shooting method and terminal | |
US20150006183A1 (en) | Electronic device, control method by electronic device, and computer readable recording medium | |
CN113141450A (en) | Shooting method, shooting device, electronic equipment and medium | |
CN113364976B (en) | Image display method and electronic equipment | |
CN109120839B (en) | Focusing method of double cameras and terminal | |
CN108459882B (en) | Electronic device and control method thereof | |
JP2013131219A (en) | Screen edition device of portable terminal and the method thereof | |
CN112165576A (en) | Image display method, image display device, storage medium and electronic equipment | |
US9445000B2 (en) | Communication apparatus, control method therefor, and program | |
WO2018168357A1 (en) | Image-capture device, image-capture method, and image-capture program | |
JP2007141064A (en) | Portable terminal and menu display switching method | |
US10333783B2 (en) | Data processing apparatus, communication apparatus, and control methods for the same | |
EP2843932B1 (en) | Device and method for making quick change to playback mode after photographing subject |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||