CN114554085A - Focusing method and device, electronic equipment and storage medium - Google Patents

Focusing method and device, electronic equipment and storage medium

Info

Publication number
CN114554085A
Authority
CN
China
Prior art keywords
lens
focal length
focusing
target
lens area
Prior art date
Legal status
Pending
Application number
CN202210119340.0A
Other languages
Chinese (zh)
Inventor
黄春成
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202210119340.0A
Publication of CN114554085A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/67: Focus control based on electronic image sensor signals
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N23/958: Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging
    • H04N23/959: Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics

Abstract

The application discloses a focusing method, a focusing device, an electronic device and a storage medium. The method is applied to the focusing device, which comprises a camera module, and the camera module comprises a liquid crystal lens. The method comprises the following steps: acquiring target focusing parameters of at least two shot objects with different depths of field; and applying a voltage to the lens area corresponding to each shot object based on the target focusing parameters, wherein the voltages applied to the lens areas corresponding to the shot objects with different depths of field are different.

Description

Focusing method and device, electronic equipment and storage medium
Technical Field
The application belongs to the technical field of camera shooting, and particularly relates to a focusing method, a focusing device, electronic equipment and a storage medium.
Background
With the continuing development of electronic devices, their functions have become increasingly rich. One indispensable function of an electronic device is realized through a camera module. To meet user requirements, an electronic device can synthesize a full-depth-of-field picture from multiple pictures taken at different depths of field. However, such synthesis places high demands on the pictures at the different depths of field and is slow, so obtaining the full-depth-of-field picture takes too long.
Disclosure of Invention
The embodiments of the application aim to provide a focusing method and device, an electronic device and a storage medium, so that the shot objects across the whole lens imaging interface are in sharp focus, thereby shortening the time required to obtain a panoramic depth (full-depth-of-field) picture.
In a first aspect, an embodiment of the present application provides a focusing method applied to a focusing device, where the focusing device includes a camera module, the camera module includes a liquid crystal lens, and the method includes:
acquiring target focusing parameters of at least two shot objects with different depths of field;
and applying a voltage to the lens area corresponding to each shooting object based on the target focusing parameter, wherein the voltages applied to the lens areas corresponding to the shooting objects with different depths of field are different.
In a second aspect, an embodiment of the present application provides a focusing device, the focusing device includes a camera module, the camera module includes a liquid crystal lens, the device further includes:
the acquisition module is used for acquiring target focusing parameters of at least two shooting objects with different depths of field;
and the voltage applying module is used for applying voltages to the lens areas corresponding to the shooting objects based on the target focusing parameters, wherein the voltages applied to the lens areas corresponding to the shooting objects with different depths of field are different.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a camera module, a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the camera module includes a liquid crystal lens, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps of the method according to the first aspect.
In the embodiment of the application, target focusing parameters of at least two shooting objects with different depths of field are obtained; based on the target focusing parameters, voltages are applied to the lens areas corresponding to the shot objects, wherein the voltages applied to the lens areas corresponding to the shot objects with different depths of field are different, so that the shot objects in the whole lens imaging interface are focused clearly, that is, full depth of field focusing can be realized, and the time for obtaining the panoramic deep picture is shortened.
Drawings
FIG. 1 is a schematic view of captured images at different depths of field in an example scene to which the present application relates;
fig. 2 is a schematic view of a panoramic depth image obtained by synthesizing the shot images of different depths according to fig. 1;
fig. 3 is a schematic structural diagram of a liquid crystal lens according to an embodiment of the present application;
FIG. 4 is a schematic liquid crystal state diagram of a cross section of the liquid crystal lens of FIG. 3 along a direction perpendicular to the orthographic projection direction of the liquid crystal lens in an embodiment of the present application;
FIG. 5 is a schematic diagram of another liquid crystal state of the liquid crystal lens of FIG. 3 along a cross section perpendicular to the orthographic projection direction of the liquid crystal lens in the embodiment of the present application;
FIG. 6 is a top view of the liquid crystal lens of FIG. 3 of the present application;
FIG. 7 is a schematic diagram of a liquid crystal lens divided into 9 candidate regions in the embodiment of the present application;
FIG. 8 is a flowchart illustrating a focusing method according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a focusing device according to another embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device according to still another embodiment of the present application;
fig. 11 is a hardware configuration diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar elements and are not necessarily used to describe a particular sequence or chronological order. It should be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates that the related objects before and after it are in an "or" relationship.
The focusing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
The main component with which an electronic device implements the camera function is the camera module (Compact Camera Module, CCM). The camera module comprises a lens, a sensor, a flexible printed circuit, an image processing chip and other components. Its working principle is that light from an object, collected through the lens, is converted into an electrical signal by a CMOS (Complementary Metal Oxide Semiconductor) or CCD (Charge-Coupled Device) integrated circuit, converted into a digital image signal by an internal image processor, and output to a digital signal processor for processing, yielding an image signal in a standard format.
The lens has an important influence on the imaging effect. Using the imaging principle, scene light passes through the lens to form a sharp image on the focal plane, and the image of the scene is finally recorded by the photosensitive CMOS or CCD sensor.
A lens is generally composed of several lens elements, which are optical elements made of transparent material whose surfaces are portions of spheres; they mainly include plastic lens elements and glass lens elements. At present, most electronic devices use a combination of plastic and glass lens elements to form the lens.
As described in the background, there is a need to take panoramic depth (full-depth-of-field) pictures when using an electronic device. In a panoramic depth picture the whole frame is in focus, that is, the picture has no background-blurring effect and all objects in the picture are sharp.
At present, the panoramic depth picture is mainly obtained by synthesizing a full-depth-of-field picture from multiple pictures with different depths of field. Illustratively, referring to fig. 1 and fig. 2 together, the left side of fig. 1 shows a close-focus image A of an example scene, in which the triangle drawn with a dotted line is a distant shot object and the triangle drawn with a solid line is a close shot object; at this time the close shot object in image A is in sharp focus. The right side of fig. 1 shows a far-focus image B of the same example scene, in which the triangle drawn with a dotted line is the close shot object and the triangle drawn with a solid line is the distant shot object; at this time the distant shot object in image B is in sharp focus. That is, the depths of field of image A and image B are different, and the focus/sharpness of the same displayed object differs between them. After panoramic depth picture synthesis is performed using image A and image B, a panoramic depth picture in which every object is displayed sharply can be obtained, namely fig. 2. A conventional way of performing such synthesis is sketched below.
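As a purely illustrative sketch of this conventional multi-picture synthesis (it is not the method of the present application; the sharpness measure, the numpy/scipy calls and the array sizes are assumptions of the illustration), two differently focused grayscale images can be fused by keeping, at each pixel, the value from whichever image is locally sharper:

```python
# Illustrative focus-stacking sketch: per-pixel selection of the sharper of two
# grayscale images, with sharpness approximated by the absolute Laplacian.
import numpy as np
from scipy.ndimage import laplace

def focus_stack(img_near, img_far):
    """Return a fused image taking each pixel from the locally sharper input."""
    sharp_near = np.abs(laplace(img_near))
    sharp_far = np.abs(laplace(img_far))
    return np.where(sharp_near >= sharp_far, img_near, img_far)

# Dummy 8x8 arrays standing in for the close-focus and far-focus shots.
a = np.random.rand(8, 8)
b = np.random.rand(8, 8)
print(focus_stack(a, b).shape)  # (8, 8)
```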
However, in practical use, the viewing angles of pictures taken at different focal lengths differ, which raises the requirements that panoramic depth picture synthesis places on the pictures; shooting pictures at several depths of field takes a long time and the synthesis itself is complex, so acquiring the panoramic depth picture takes a long time. Moreover, affected by the complexity of the synthesis technique, the different viewing angles of the pictures and other factors, mismatches easily occur when several pictures are fused into one full-depth-of-field picture, causing problems such as false image contours; the synthesized picture is then unnatural in the depth-of-field transition regions and its display effect is poor.
To solve the above problems, the embodiments of the present application realize full-depth-of-field focusing based on a liquid crystal lens. The liquid crystal lens is a new type of lens that has emerged with the development of optoelectronics; it exploits the electro-optic effect and microelectronic fabrication processes to change the spatial distribution of the refractive index of the lens.
Liquid crystal is a special organic compound whose state, under certain conditions, lies between that of a liquid and that of a crystal: it flows like an ordinary liquid, while its molecular arrangement tends toward an ordered structure similar to a crystal, so it exhibits the optical properties of a crystal.
Referring to fig. 3 to 6 together, fig. 3 to 6 are diagrams of the liquid crystal lens at various angles, wherein fig. 3 is a schematic structural diagram of the liquid crystal lens, fig. 4 to 5 are sectional views of the liquid crystal lens of fig. 3 along a direction perpendicular to an orthographic projection direction of the liquid crystal lens, and fig. 6 is a top view of the liquid crystal lens of fig. 3. The liquid crystal lens includes an upper glass substrate 11 and a lower glass substrate 12, and a liquid crystal layer 13 between the upper glass substrate 11 and the lower glass substrate 12.
In fig. 3 to 6, V1 is the voltage of the portion enclosed by circle a in fig. 3, and V0 is the voltage of the portion of the liquid crystal lens outside circle a. Before a voltage is applied, V0 = V1 and the liquid crystal molecules in the liquid crystal layer are arranged relatively uniformly (as in fig. 4); when different voltages are applied (i.e. V0 ≠ V1), the arrangement of the liquid crystal molecules changes (as in fig. 5).
The changed arrangement of the liquid crystal molecules induces a parabolic phase profile in the liquid crystal lens, and in this state the liquid crystal lens can act as a conventional optical lens. On this basis, the embodiments of the application may divide the liquid crystal lens into several candidate regions in advance, and these candidate regions are used later to form the lens areas corresponding to the respective shot objects. It should be noted that the specific number of candidate regions can be set according to the cost of the electronic device and the required focusing fineness; in principle, the more candidate regions the better, and the size and shape of the candidate regions can likewise be set according to actual requirements and design choices.
Exemplarily, referring to fig. 3 to 7 together, fig. 7 is a schematic diagram of the liquid crystal lens divided into 9 candidate regions. The applied voltage of each candidate region can be adjusted individually (i.e. V1 to V9). Therefore, when at least two shot objects with different depths of field exist in a scene, the candidate regions through which the shot objects at the different depths of field are imaged can be determined; those candidate regions constitute the lens areas corresponding to the respective shot objects. Focusing of the shot objects at different depths of field can then be achieved by applying different voltages to the different lens areas, that is, each shot object is in sharp focus within its lens area, so that panoramic depth focusing can be realized. A minimal sketch of this per-region voltage assignment is given below.
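As an illustrative aside, the grouping of candidate regions into per-subject lens areas and the per-area voltage assignment can be sketched as follows; the Python sketch is a simplification, and all region labels, subject names and voltage values are assumptions of the illustration rather than values from the application:

```python
# Illustrative sketch only: group the 9 candidate regions of Fig. 7 into
# per-subject lens areas and expand the per-area drive voltage to each region.
# Region labels follow the figure; subjects and voltages are dummy values.

# Scene 1 from the description: the subject at depth of field A images through
# regions V1-V3, the subject at depth of field B through regions V4-V9.
lens_areas = {
    "subject_A": ["V1", "V2", "V3"],
    "subject_B": ["V4", "V5", "V6", "V7", "V8", "V9"],
}

# One drive voltage per lens area (volts, illustrative values only).
area_voltages = {"subject_A": 2.1, "subject_B": 3.4}

def build_region_voltage_map(lens_areas, area_voltages):
    """Expand per-area voltages into a per-candidate-region voltage map."""
    region_voltage = {}
    for subject, regions in lens_areas.items():
        for region in regions:
            region_voltage[region] = area_voltages[subject]
    return region_voltage

print(build_region_voltage_map(lens_areas, area_voltages))
# e.g. {'V1': 2.1, 'V2': 2.1, 'V3': 2.1, 'V4': 3.4, ..., 'V9': 3.4}
```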
Specifically, based on the above structure, a focusing method according to the present application may be provided. Referring to fig. 8, fig. 8 is a flowchart illustrating a focusing method according to an embodiment of the present application, where the focusing method may include:
step 810, acquiring target focusing parameters of at least two shot objects with different depths of field;
and step 820, applying voltages to the lens areas corresponding to the shooting objects based on the target focusing parameters, wherein the voltages applied to the lens areas corresponding to the shooting objects with different depths of field are different.
In the embodiment of the application, target focusing parameters of at least two shooting objects with different depths of field are obtained; based on the target focusing parameters, voltages are applied to the lens areas corresponding to the shot objects, wherein the voltages applied to the lens areas corresponding to the shot objects with different depths of field are different, so that the shot objects in the whole lens imaging interface are focused clearly, that is, full depth of field focusing can be realized, and the time for obtaining the panoramic deep picture is shortened.
In some optional examples, the focusing method may be performed while the electronic device displays a shooting preview interface. The shooting preview interface is an interface in which, after the electronic device starts the camera module, image data of a certain range in front of the lens of the camera module is collected and displayed through the display unit of the electronic device. The shooting preview interface provides a preview so that the user can adjust the shooting area before shooting.
Before determining the target focusing parameters of the shot objects with different depths of field, the lens area corresponding to each shot object in the liquid crystal lens may first be determined. The lens areas are determined according to the depths of field of the shot objects in the scene of each shot, and they may change accordingly as the shot objects in the scene change.
Continuing with the example of fig. 7: in scene 1, when the liquid crystal lens forms the shooting preview image, if the shot object with depth of field A is imaged through the candidate regions indicated by V1, V2 and V3 and the shot object with depth of field B is imaged through the candidate regions indicated by V4 to V9, then in scene 1 the lens area corresponding to the shot object with depth of field A consists of the regions of V1, V2 and V3, and the lens area corresponding to the shot object with depth of field B consists of the regions of V4 to V9.
In scene 2, if the shot object with depth of field C is imaged through the candidate regions indicated by V1 to V6 and the shot object with depth of field D is imaged through the candidate regions indicated by V7, V8 and V9, then the lens area corresponding to the shot object with depth of field C consists of the regions V1 to V6, and the lens area corresponding to the shot object with depth of field D consists of the regions V7, V8 and V9.
In some optional examples, the target focusing parameter in step 810 is a parameter, relating a lens area and its shot object, whose value brings that shot object into focus. For example, it may be the target focusing image distance between a shot object at a given depth of field and its corresponding lens area, or the target focusing object distance between the shot object and its corresponding lens area, or the target focusing focal length of the lens area corresponding to the shot object, and so on.
It should be noted that, provided the shot objects do not move, each shot object has a different depth of field and each lens area therefore needs to reach a different target focusing focal length. Depending on the focusing technique used, the fixed target focusing object distance or target focusing image distance from a shot object to its lens area can be determined, and from it the target focusing focal length. In this way the target focusing parameters of the shot objects with different depths of field can be obtained.
In some optional examples, in step 820, after determining the lens area corresponding to the photographic subject and obtaining the target focusing parameter of each photographic subject, a voltage may be applied to the lens area corresponding to the photographic subject according to the target focusing parameter. The magnitude of the voltage applied to each lens area can be determined according to the target focusing parameters, and the magnitudes of the voltages applied to the lens areas are different because the target focusing focal lengths of the shooting objects with different depths of field are different.
It should be noted that, as can be seen from fig. 3 to 6 and the foregoing analysis, after a voltage is applied to the lens area corresponding to each shot object, the arrangement of the liquid crystal molecules in each lens area changes. The changed molecular arrangement produces a parabolic phase profile in each lens area, so the focal length of the lens area is adjusted accordingly, and each lens area can then act as an optical lens for its corresponding shot object.
Because the voltages are applied to the lens areas based on the target focusing parameters, the focal length of each lens area is adjusted so that each shot object is in sharp focus after being imaged through its corresponding lens area; that is, panoramic depth focusing can be realized.
Compared with the full-depth-of-field picture synthesis used in the related art, the synthesis of pictures at different depths of field is omitted, so the time to obtain the panoramic depth picture is shortened and the problem of the prior art, that obtaining a panoramic depth picture takes too long, is solved. On the other hand, after panoramic depth focusing is completed, the panoramic depth picture is obtained with a single shot; since the picture synthesis process is omitted, the problems of unnatural depth-of-field transition regions and false image contours caused by picture synthesis and multiple exposures can be avoided, and the picture display effect is good.
Continuing with the example of fig. 1 and 2, assume that the upper triangle in the example scene is a mountain far from the focusing device and the lower triangle is a building closer to the focusing device. Before any voltage is applied to the lens areas, only one of the shot objects in the captured image can be in focus; for example, in the left-hand schematic of fig. 1, the mountain is blurred and the building is sharp. With the scheme of the embodiments of the application, different voltages can be applied to the corresponding lens areas according to the target focusing parameters of the distant mountain and the nearby building, and the finally focused captured image is as shown in fig. 2, where both the distant mountain and the building are in sharp focus, so panoramic depth focusing is realized.
In order to achieve sharp focusing of the respective lens regions for the photographic subject, in some alternative examples, applying a voltage to the lens region corresponding to each photographic subject based on the target focusing parameter may include:
determining a target focusing focal length of a lens area corresponding to each shooting object according to the target focusing parameters of each shooting object; and respectively applying voltage to the lens area corresponding to each shooting object so as to enable the focal length of each lens area to be positioned in a target focal length interval related to the target focal length of each lens area.
The target focal length interval accounts for the state of the device and for differences between devices: a focal length that lies within a certain error range of the target focusing focal length is considered to belong to the target focal length interval associated with that target focusing focal length.
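A minimal sketch of this tolerance check, with an assumed error range (the concrete tolerance is device-dependent and is not specified in the application), is:

```python
# Minimal sketch: a lens area counts as focused when its achieved focal length
# falls inside an interval centred on the target focusing focal length.
def in_target_interval(focal_length_mm, target_focal_mm, tolerance_mm=0.05):
    """True if the focal length lies in [target - tol, target + tol]."""
    return abs(focal_length_mm - target_focal_mm) <= tolerance_mm

print(in_target_interval(4.32, 4.30))  # True
print(in_target_interval(4.50, 4.30))  # False
```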
It should be noted that, as can be seen from the gaussian imaging formula (as shown in formula (1)), the liquid crystal lens realizes focusing by changing the focal length.
1/f=1/u+1/v (1)
Wherein f is the focal length, u is the object distance, and v is the image distance. In some embodiments of the present application, u is an object distance from a subject to a corresponding lens region, and v is an image distance from the subject to the corresponding lens region.
In the present application, focusing needs to be achieved for the shot object corresponding to each lens area. Therefore, the target focusing focal length of each lens area can be calculated from the obtained target focusing parameters of the shot objects at the different depths of field according to the Gaussian imaging formula; the voltage to be applied to the corresponding lens area is determined from that target focusing focal length, and the different voltages are then applied to the lens areas, so that the focal length of each lens area lies in the target focal length interval associated with its target focusing focal length and meets the focusing requirement of its shot object.
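A minimal Python sketch of this use of formula (1), with illustrative distances in millimetres (the numeric values are not from the application), is:

```python
# Gaussian (thin-lens) imaging formula 1/f = 1/u + 1/v, rearranged to give the
# target focusing focal length from an object distance u and image distance v.
def focal_length(u_mm, v_mm):
    """f = u*v / (u + v)."""
    return (u_mm * v_mm) / (u_mm + v_mm)

# A subject 500 mm from a lens area, imaged 4.3 mm behind it, needs a focal
# length of about 4.26 mm for that area.
print(round(focal_length(500.0, 4.3), 3))  # 4.263
```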
It should be noted that, since each lens area of the liquid crystal lens needs an applied voltage to realize the parabolic phase profile, in some alternative examples a plurality of electrodes may be arranged in the liquid crystal lens, with at least one electrode provided for each lens area; the voltage is applied to the lens area corresponding to an electrode by controlling that electrode, thereby changing the focal length of the lens area.
Based on this, the process of applying the voltage to the lens region corresponding to each of the photographic subjects may include: determining the voltage of each electrode arranged in each lens area according to the target focusing focal length of each lens area; and controlling each electrode arranged in each lens area to apply voltage to the corresponding lens area. In this example, by providing an electrode in each lens region and controlling the electrode to apply a voltage to the corresponding lens region, a technical basis is provided for focal length adjustment of the different lens regions.
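A sketch of this electrode control step is given below; the application does not specify the focal-length-to-voltage relationship, so the per-device calibration table, the linear interpolation and all numeric values are assumptions of the illustration:

```python
# Illustrative sketch: convert each lens area's target focusing focal length
# into an electrode drive voltage via an assumed calibration table.
calibration = [  # (focal_length_mm, voltage_V) pairs, assumed measured per device
    (3.5, 1.0),
    (4.0, 2.0),
    (4.5, 3.0),
    (5.0, 4.0),
]

def voltage_for_focal_length(target_f_mm):
    """Linearly interpolate the drive voltage for a target focal length."""
    pts = sorted(calibration)
    if target_f_mm <= pts[0][0]:
        return pts[0][1]
    if target_f_mm >= pts[-1][0]:
        return pts[-1][1]
    for (f0, v0), (f1, v1) in zip(pts, pts[1:]):
        if f0 <= target_f_mm <= f1:
            t = (target_f_mm - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)

def apply_voltages(area_target_f, set_electrode_voltage):
    """Drive the electrode(s) of every lens area to the interpolated voltage."""
    for area, target_f in area_target_f.items():
        set_electrode_voltage(area, voltage_for_focal_length(target_f))

# Example: print instead of driving real electrodes.
apply_voltages({"area_1": 4.26, "area_2": 4.80},
               lambda area, v: print(area, round(v, 2), "V"))
```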
In these examples, the target focusing focal length is determined by the target focusing parameters, and then the voltage applied to each lens region is determined according to the target focusing focal length, so that the focal length of each lens region is changed, and the clear focusing of the corresponding shot object of each lens region is realized.
In some alternative examples, the setting of the target focus parameter may be related to the focusing technique used.
For example, when the target focusing parameter is a target focusing image distance between the photographic subject and the corresponding lens region, or the target focusing parameter is the target focusing image distance and an object distance between the photographic subject and the corresponding lens region, focusing may be performed by using a Phase Detection Auto Focus (PDAF) technique to determine the target focusing focal length.
The target focused image distance is a distance required between an image of the subject and the corresponding lens region when the subject is focused. In the case that the target focusing parameter includes a target focusing image distance, determining a target focusing focal length of the lens region corresponding to each photographic subject according to the target focusing parameter of each photographic subject may include:
an object distance between each photographic subject and the corresponding lens region is acquired. And calculating the target focusing focal length of each lens area according to the target focusing image distance and the object distance between each shooting object and the corresponding lens area.
As can be seen from the Gaussian formula, i.e. formula (1), to bring each shot object into focus, the focal length of each lens area needs to reach the target focal length interval associated with its target focusing focal length, and the target focusing focal length is governed by the object distance and the image distance.
Taking PDAF focusing as an example, the image distance between the imaging of the photographic subject and the lens region (i.e., the target focusing image distance) can be obtained through PDAF focusing, and the object distance between the photographic subject and the lens region is not changed before and after the voltage is applied to the lens region, so that the object distance between each photographic subject and the corresponding lens region and the target focusing image distance can be substituted into a gaussian formula to calculate the target focusing focal length.
Here the target focusing image distance is known and the object distance is unknown. Since the object distance between each shot object and its corresponding lens area is unchanged before and after the voltage is applied, the object distance can be obtained from the initial focal length and the initial image distance before the voltage is applied: in the initial state before the voltage is applied, the parameters between the shot object and the lens area still satisfy the Gaussian formula.
That is, an initial focal length and an initial image distance between each photographic subject and the corresponding lens region may be obtained, where the initial focal length is a focal length of each lens region before a voltage is applied to the lens region, and the initial image distance is an image distance corresponding to the initial focal length. And calculating the object distance between each shooting object and the corresponding lens area according to the initial focal length and the initial image distance.
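A minimal sketch of this PDAF branch, under the stated assumption that the object distance does not change when the voltage is applied (the variable names and numbers are illustrative only), is:

```python
# Sketch of the PDAF branch: recover the fixed object distance from the initial
# state, then compute the target focusing focal length for the PDAF image distance.
def object_distance(initial_f_mm, initial_v_mm):
    """From 1/f0 = 1/u + 1/v0: u = f0*v0 / (v0 - f0)."""
    return (initial_f_mm * initial_v_mm) / (initial_v_mm - initial_f_mm)

def target_focal_length(u_mm, target_v_mm):
    """From 1/f = 1/u + 1/v_target: f = u*v_target / (u + v_target)."""
    return (u_mm * target_v_mm) / (u_mm + target_v_mm)

f0, v0 = 4.0, 4.05            # initial focal length and initial image distance (mm)
u = object_distance(f0, v0)   # object distance, assumed unchanged by the voltage
v_target = 4.20               # target focusing image distance reported by PDAF (mm)
print(round(u, 1), round(target_focal_length(u, v_target), 3))  # 324.0 4.146
```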
The initial focal length can be calculated according to the following formula (2).
[Formula (2), rendered as an image in the original publication, expresses the focal length f in terms of the aperture D, the wavelength λ and the phase difference Δδ defined below.]
In formula (2), f is the focal length, D is the aperture of the lens area in the liquid crystal lens, λ is the wavelength of the light incident on the lens area, and Δδ is the phase difference between portion a and the other portions of the lens area in fig. 3 (i.e. the phase difference produced by the voltage difference between the circled portion a and the rest of the liquid crystal lens).
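As a hedged illustration of formula (2), a common closed form for a liquid crystal lens is f = D²/(8λΔδ) with Δδ expressed in wavelengths; since the formula itself appears only as an image in the original publication, this form and the numbers below should be read as assumptions:

```python
# Assumed form of formula (2): f = D^2 / (8 * wavelength * phase_difference),
# with the phase difference expressed in wavelengths. Values are illustrative.
def initial_focal_length(aperture_mm, wavelength_mm, phase_diff_waves):
    return aperture_mm ** 2 / (8.0 * wavelength_mm * phase_diff_waves)

# 2 mm lens-area aperture, 550 nm light, 100-wavelength phase difference.
print(round(initial_focal_length(2.0, 550e-6, 100.0), 2), "mm")  # ~9.09 mm
```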
On the basis of the initial focal length calculated from formula (2), the initial image distance at which the shot object is imaged by the lens area in the initial state can be obtained by PD focusing. In the PDAF technology used for PD focusing, pixels with phase characteristics are inserted at regular positions in the photosensitive element and appear in pairs; like a pair of human eyes, each pair consists of a first phase pixel and a second phase pixel, also called the left/right PD pixels. Phase data are obtained through the first and second phase pixels, a phase difference is obtained from the phase data, and the initial image distance is then obtained from the phase difference.
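A hypothetical sketch of how a phase difference could be estimated from such a pair of phase-pixel signals (a simple sum-of-absolute-differences search; this is an illustration, not the specific method of the application) is:

```python
# Hypothetical sketch: estimate the PDAF phase difference as the shift that best
# aligns the left and right phase-pixel signals (sum-of-absolute-differences).
def phase_difference(left, right, max_shift=4):
    """Return the integer shift (in pixels) minimising the SAD between signals."""
    best_shift, best_sad = 0, float("inf")
    n = len(left)
    for s in range(-max_shift, max_shift + 1):
        pairs = [(left[i], right[i + s]) for i in range(n) if 0 <= i + s < n]
        sad = sum(abs(a - b) for a, b in pairs)
        if sad < best_sad:
            best_shift, best_sad = s, sad
    return best_shift

left = [10, 20, 80, 120, 80, 20, 10, 5]
right = [20, 80, 120, 80, 20, 10, 5, 3]   # the same edge shifted by one pixel
print(phase_difference(left, right))       # -1
```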
In other examples, to obtain more accurate results, the phase data may be filtered before the phase difference is obtained, so as to reduce errors and improve data accuracy.
In these embodiments, when the target focusing parameters include the target focusing image distance, a way of determining the target focusing focal length from the Gaussian formula and the focusing technique is provided, so that the target focusing focal lengths for shot objects at different depths of field can be obtained accurately, providing a technical basis for panoramic depth focusing. The voltage finally applied to each lens area changes the image distance between the shot object at its depth of field and the corresponding lens area, so that each lens area reaches its target focusing focal length; each shot object is then in sharp focus when imaged by its corresponding lens area, and panoramic depth focusing can be realized.
In other examples, other focusing manners may be used to determine the target focusing parameters as long as the focal length adjustment of the lens region corresponding to each photographic subject can be finally achieved.
For example, in some optional examples, when the target focusing parameter includes a target focusing object distance between the shot object and the corresponding lens area, focusing may be performed with an iToF (indirect Time of Flight) technique or a laser focusing technique to determine the target focusing focal length.
In this case, to obtain the target focusing focal lengths for the shot objects at different depths of field, the image distance between each shot object and its corresponding lens area may be obtained, and the target focusing focal length of each lens area is calculated from the target focusing object distance and that image distance. The target focusing object distance is the distance required between a shot object and its corresponding lens area when the shot object is in focus.
Still according to the Gaussian formula, to bring each shot object into focus, the focal length of each lens area needs to reach the target focal length interval associated with its target focusing focal length, and the target focusing focal length is governed by the object distance and the image distance.
Taking laser focusing as an example, the object distance between the photographic subject and the lens region (i.e. the target focused object distance) can be obtained through laser focusing, and the image distance between the photographic subject and the corresponding lens region is not changed before and after the voltage is applied to the lens region. Therefore, the image distance between each shooting object and the corresponding lens area and the target focusing object distance can be substituted into the Gaussian formula, and the target focusing focal length can be obtained through calculation.
Here the target focusing object distance is known and the image distance is unknown. Since the image distance between each shot object and its corresponding lens area is unchanged before and after the voltage is applied, the image distance can be obtained from the initial focal length and the initial object distance before the voltage is applied: in the initial state before the voltage is applied, the parameters between the shot object and the lens area still satisfy the Gaussian formula.
That is, an initial focal length and an initial object distance between each photographic subject and the corresponding lens region may be obtained, where the initial focal length is a focal length of each lens region before a voltage is applied to the lens region, and the initial object distance is an object distance corresponding to the initial focal length. And calculating the image distance between each shooting object and the corresponding lens area according to the initial focal length and the initial object distance.
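A minimal sketch of this laser/iToF branch, under the stated assumption that the image distance does not change when the voltage is applied (names and numbers are illustrative only), is:

```python
# Sketch of the laser / iToF branch: recover the fixed image distance from the
# initial state, then compute the target focusing focal length for the measured
# target focusing object distance.
def image_distance(initial_f_mm, initial_u_mm):
    """From 1/f0 = 1/u0 + 1/v: v = f0*u0 / (u0 - f0)."""
    return (initial_f_mm * initial_u_mm) / (initial_u_mm - initial_f_mm)

def target_focal_length(target_u_mm, v_mm):
    """From 1/f = 1/u_target + 1/v: f = u_target*v / (u_target + v)."""
    return (target_u_mm * v_mm) / (target_u_mm + v_mm)

f0, u0 = 4.0, 300.0           # initial focal length and initial object distance (mm)
v = image_distance(f0, u0)    # image distance, assumed unchanged by the voltage
u_target = 500.0              # target focusing object distance from laser/iToF (mm)
print(round(v, 3), round(target_focal_length(u_target, v), 3))  # 4.054 4.022
```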
The calculation of the initial focal length is the same as in the PD focusing example and is not repeated here, while the initial object distance can be obtained with the iToF technique or the laser focusing technique.
Therefore, in these embodiments, the target focusing focal length is determined from the target focusing object distance, providing a reference for the subsequent focusing of each shot object. The voltage finally applied to each lens area is matched to the object distance between the shot object at its depth of field and the corresponding lens area, so that each lens area reaches its target focusing focal length; each shot object is then in sharp focus when imaged by its corresponding lens area, and panoramic depth focusing can be realized.
Fig. 9 is a schematic structural diagram of a focusing apparatus according to another embodiment of the present disclosure, and as shown in fig. 9, the focusing apparatus 900 may include a camera module 910, which includes a liquid crystal lens; the focusing apparatus 900 further includes:
an obtaining module 920, configured to obtain target focusing parameters of at least two photographic objects with different depths of field;
a voltage applying module 930, configured to apply a voltage to the lens area corresponding to each object based on the target focusing parameter, where the applied voltages to the lens areas corresponding to the objects with different depths of field are different.
In the embodiment of the application, target focusing parameters of at least two shooting objects with different depths of field are obtained; based on the target focusing parameters, voltages are applied to the lens areas corresponding to the shot objects, wherein the voltages applied to the lens areas corresponding to the shot objects with different depths of field are different, so that the shot objects in the whole lens imaging interface are focused clearly, that is, full depth of field focusing can be realized, and the time for obtaining the panoramic deep picture is shortened.
In some optional examples, the voltage applying module includes:
the determining unit is used for determining a target focusing focal length of a lens area corresponding to each shooting object according to the target focusing parameter of each shooting object;
and the voltage applying unit is used for respectively applying voltage to the lens area corresponding to each shooting object so as to enable the focal length of each lens area to be positioned in a target focal length interval related to the target focusing focal length of each lens area.
In other alternative examples, at least one electrode is correspondingly arranged in each lens area; the voltage applying unit includes:
a determining subunit, configured to determine, according to the target focal length of each lens region, a voltage of each electrode provided for each lens region;
and the control subunit is used for controlling each electrode arranged in each lens area to apply voltage to the corresponding lens area.
In yet other alternative examples, the target focus parameter includes a target focus image distance between the photographic subject and the corresponding lens region;
the second determination unit includes:
an acquisition subunit configured to acquire an object distance between each photographic subject and the corresponding lens region;
and the calculating subunit is used for calculating the target focusing focal length of each lens area according to the target focusing image distance and the object distance between each shooting object and the corresponding lens area.
In still some optional examples, the obtaining subunit is specifically configured to obtain an initial focal length and an initial image distance between each photographic object and a corresponding lens region, where the initial focal length is a focal length of each lens region before a voltage is applied to the lens region, and the initial image distance is an image distance corresponding to the initial focal length; and calculating the object distance between each shooting object and the corresponding lens area according to the initial focal length and the initial image distance.
The focusing device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The focusing device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The focusing device provided in the embodiment of the present application can implement each process implemented by the focusing method in the method embodiment of fig. 8, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 10, an electronic device 1000 is further provided in this embodiment of the present application, and includes a processor 1001, a memory 1002, and a program or an instruction stored in the memory 1002 and executable on the processor 1001, where the program or the instruction is executed by the processor 1001 to implement each process of the foregoing focusing method embodiment, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1100 includes, but is not limited to: a radio frequency unit 1101, a network module 1102, an audio output unit 1103, an input unit 1104, a sensor 1105, a display unit 1106, a user input unit 1107, an interface unit 1108, a memory 1109, a processor 1110, and the like.
Those skilled in the art will appreciate that the electronic device 1100 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 1110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system. The electronic device structure shown in fig. 11 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
It should be understood that in the embodiment of the present application, the input Unit 1104 may include a Graphics Processing Unit (GPU) 11041 and a microphone 11042, and the Graphics processor 11041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The image capturing device is a camera which can comprise a camera module, and the camera module can also comprise a liquid crystal lens.
The display unit 1106 may include a display panel 11061, and the display panel 11061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1107 includes a touch panel 11071 and other input devices 11072. A touch panel 11071, also called a touch screen. The touch panel 11071 may include two portions of a touch detection device and a touch controller. Other input devices 11072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1109 may be used for storing software programs and various data including, but not limited to, application programs and an operating system. Processor 1110 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1110.
The processor 1110 may be configured to obtain target focusing parameters of at least two photographic subjects with different depths of field.
The processor 1110 is further configured to apply a voltage to a lens area corresponding to each of the objects through the interface unit 1108 based on the target focusing parameter, where the applied voltage is different for lens areas corresponding to the objects with different depths of field.
In yet another alternative embodiment, the processor 1110 may be configured to determine the target focus focal length of the lens region corresponding to each photographic subject according to the target focus parameter of each photographic subject.
The processor 1110 may be configured to apply a voltage to the lens region corresponding to each photographic subject through the interface unit 1108, so that the focal length of each lens region is located in a target focal length interval associated with the target focal length of each lens region.
In yet another alternative embodiment, at least one electrode is correspondingly arranged in each lens area; the processor 1110 may be configured to determine a voltage of each electrode disposed in each lens region according to the target focal length of each lens region, and then control each electrode disposed in each lens region to apply a voltage to the corresponding lens region through the interface unit 1108.
In yet another optional embodiment, the target focusing parameter includes a target focusing image distance between the photographic subject and the corresponding lens region;
a processor 1110 operable to obtain an object distance between each photographic subject and the corresponding lens region; and calculating the target focusing focal length of each lens area according to the target focusing image distance and the object distance between each shooting object and the corresponding lens area.
In yet another optional embodiment, the processor 1110 obtains an initial focal length and an initial image distance between each photographic object and the corresponding lens region, where the initial focal length is a focal length of each lens region before a voltage is applied to the lens region, and the initial image distance is an image distance corresponding to the initial focal length; and calculating the object distance between each shooting object and the corresponding lens area according to the initial focal length and the initial image distance.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the foregoing focusing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
An embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement each process of the foregoing focusing method embodiment and achieve the same technical effect; details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. A focusing method is applied to a focusing device, and is characterized in that the focusing device comprises a camera module, the camera module comprises a liquid crystal lens, and the method comprises the following steps:
acquiring target focusing parameters of at least two shot objects with different depths of field;
and applying a voltage to the lens area corresponding to each shooting object based on the target focusing parameter, wherein the voltages applied to the lens areas corresponding to the shooting objects with different depths of field are different.
2. The method of claim 1, wherein applying a voltage to a lens region corresponding to each photographic subject based on the target focus parameter comprises:
determining a target focusing focal length of a lens area corresponding to each shooting object according to the target focusing parameters of each shooting object;
and respectively applying voltage to the lens area corresponding to each shooting object so as to enable the focal length of each lens area to be positioned in a target focal length interval related to the target focal length of each lens area.
3. The method of claim 2, wherein at least one electrode is disposed for each lens region; the applying of the voltage to the lens area corresponding to each shooting object respectively comprises the following steps:
determining the voltage of each electrode arranged in each lens area according to the target focusing focal length of each lens area;
and controlling each electrode arranged in each lens area to apply voltage to the corresponding lens area.
4. The method of claim 2, wherein the target focus parameters include a target focus image distance between the subject and the corresponding lens region;
the determining the target focusing focal length of the lens area corresponding to each shooting object according to the target focusing parameter of each shooting object includes:
acquiring an object distance between each shooting object and a corresponding lens area;
and calculating the target focusing focal length of each lens area according to the target focusing image distance and the object distance between each shooting object and the corresponding lens area.
5. The method of claim 4, wherein the obtaining the object distance between each photographic subject and the corresponding lens area comprises:
acquiring an initial focal length and an initial image distance between each shooting object and a corresponding lens region, wherein the initial focal length is the focal length of each lens region before voltage is applied to the lens region, and the initial image distance is the image distance corresponding to the initial focal length;
and calculating the object distance between each shooting object and the corresponding lens area according to the initial focal length and the initial image distance.
6. A focusing device, characterized in that the focusing device comprises a camera module, the camera module comprises a liquid crystal lens, and the device further comprises:
the acquisition module is used for acquiring target focusing parameters of at least two shooting objects with different depths of field;
and the voltage applying module is used for applying voltages to the lens areas corresponding to the shooting objects based on the target focusing parameters, wherein the voltages applied to the lens areas corresponding to the shooting objects with different depths of field are different.
7. The apparatus of claim 6, wherein the voltage applying module comprises:
the determining unit is used for determining a target focusing focal length of a lens area corresponding to each shooting object according to the target focusing parameter of each shooting object;
and the voltage applying unit is used for respectively applying voltage to the lens area corresponding to each shooting object so as to enable the focal length of each lens area to be positioned in a target focal length interval related to the target focusing focal length of each lens area.
8. The device of claim 7, wherein at least one electrode is disposed in each lens area; and the voltage applying unit comprises:
a determining subunit, configured to determine, according to the target focusing focal length of each lens area, a voltage of each electrode provided for that lens area;
and a control subunit, configured to control each electrode arranged in each lens area to apply a voltage to the corresponding lens area.
9. The device of claim 7, wherein the target focusing parameters comprise a target focusing image distance between the shooting object and the corresponding lens area;
the determining unit comprises:
an acquisition subunit, configured to acquire an object distance between each shooting object and the corresponding lens area;
and a calculating subunit, configured to calculate the target focusing focal length of each lens area according to the target focusing image distance and the object distance between each shooting object and the corresponding lens area.
10. The device of claim 9, wherein the acquisition subunit is configured to acquire an initial focal length and an initial image distance of the lens area corresponding to each shooting object, wherein the initial focal length is the focal length of each lens area before a voltage is applied to the lens area, and the initial image distance is the image distance corresponding to the initial focal length; and to calculate the object distance between each shooting object and the corresponding lens area according to the initial focal length and the initial image distance.
11. An electronic device comprising a camera module, a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the camera module comprises a liquid crystal lens, and wherein the program or instructions, when executed by the processor, implement the steps of the focusing method of any one of claims 1-5.
12. A readable storage medium, on which a program or instructions are stored, which when executed by a processor, implement the steps of the focusing method as claimed in any one of claims 1 to 5.
CN202210119340.0A 2022-02-08 2022-02-08 Focusing method and device, electronic equipment and storage medium Pending CN114554085A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210119340.0A CN114554085A (en) 2022-02-08 2022-02-08 Focusing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114554085A (en) 2022-05-27

Family

ID=81673947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210119340.0A Pending CN114554085A (en) 2022-02-08 2022-02-08 Focusing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114554085A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104144295A (en) * 2014-08-12 2014-11-12 北京智谷睿拓技术服务有限公司 Imaging control method and device and imaging equipment
CN105827922A (en) * 2016-05-25 2016-08-03 京东方科技集团股份有限公司 Image shooting device and shooting method thereof
CN106454116A (en) * 2016-11-18 2017-02-22 成都微晶景泰科技有限公司 Automatic full-focus imaging method and device
CN111308741A (en) * 2018-12-12 2020-06-19 电子科技大学 Small concave imaging device and imaging method based on liquid crystal lens
CN113141447A (en) * 2020-03-04 2021-07-20 电子科技大学 Full-field-depth image acquisition method, full-field-depth image synthesis device, full-field-depth image equipment and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115086558A (en) * 2022-06-14 2022-09-20 Oppo广东移动通信有限公司 Focusing method, image pickup apparatus, terminal apparatus, and storage medium
CN115086558B (en) * 2022-06-14 2023-12-01 Oppo广东移动通信有限公司 Focusing method, image pickup apparatus, terminal apparatus, and storage medium
CN115150553A (en) * 2022-06-27 2022-10-04 Oppo广东移动通信有限公司 Focusing method and device, electronic equipment and storage medium
CN115150553B (en) * 2022-06-27 2024-02-20 Oppo广东移动通信有限公司 Focusing method, focusing device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
JP7311418B2 (en) Focusing method, terminal and computer readable storage medium
EP3326360B1 (en) Image capturing apparatus and method of operating the same
CN109829863B (en) Image processing method and device, electronic equipment and storage medium
CN114554085A (en) Focusing method and device, electronic equipment and storage medium
CN112532881B (en) Image processing method and device and electronic equipment
CN103402058A (en) Shot image processing method and device
CN114445315A (en) Image quality enhancement method and electronic device
CN114390201A (en) Focusing method and device thereof
US20160292842A1 (en) Method and Apparatus for Enhanced Digital Imaging
CN113747067B (en) Photographing method, photographing device, electronic equipment and storage medium
CN111866378A (en) Image processing method, apparatus, device and medium
CN113866782A (en) Image processing method and device and electronic equipment
CN113542600A (en) Image generation method, device, chip, terminal and storage medium
CN112929563A (en) Focusing method and device and electronic equipment
CN106303202A (en) A kind of image information processing method and device
CN112291473A (en) Focusing method and device and electronic equipment
CN112672058B (en) Shooting method and device
CN112261262B (en) Image calibration method and device, electronic equipment and readable storage medium
CN112653841B (en) Shooting method and device and electronic equipment
CN114244999B (en) Automatic focusing method, device, image pickup apparatus and storage medium
CN114245018A (en) Image shooting method and device
CN114390189A (en) Image processing method, device, storage medium and mobile terminal
CN110771147A (en) Method for adjusting parameters of shooting device, control equipment and shooting system
CN114025100B (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN113873160B (en) Image processing method, device, electronic equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination